One of the most pressing problems in data technology today is the need for more data storage. The volume of enterprise and personal data being generated is massive and continues to grow, and legacy data center and cloud solutions are falling short of the demand. Domo estimates that every person on earth will be generating 1.7 MB of data every second by 2020, which in turn drives demand for storage that is both cost-effective and dense. If a 4TB and an 8TB drive both consume 9W of power during operation, the power savings alone favor the denser drive. Density also pays off in rack space, equipment cost, and the real estate required for deployment; at the same time, data storage solutions are moving toward network connectivity rather than the edge.
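To make the power argument concrete, here is a quick back-of-the-envelope sketch in Python; the electricity rate is an assumed figure for illustration, not from the article:

```python
# Illustrative arithmetic only: the electricity rate is an assumption,
# not a vendor figure. Both drives draw 9W, per the example above.

HOURS_PER_YEAR = 24 * 365
COST_PER_KWH = 0.12  # assumed electricity rate, USD per kWh

def annual_power_cost(watts: float) -> float:
    """Yearly electricity cost for a device drawing `watts` continuously."""
    return watts / 1000 * HOURS_PER_YEAR * COST_PER_KWH

for capacity_tb, watts in [(4, 9), (8, 9)]:
    print(f"{capacity_tb} TB drive: {watts / capacity_tb:.2f} W/TB, "
          f"${annual_power_cost(watts) / capacity_tb:.2f}/TB/year")

# 4 TB drive: 2.25 W/TB, $2.37/TB/year
# 8 TB drive: 1.12 W/TB, $1.18/TB/year
```

Doubling capacity at the same wattage halves both the power and the operating cost per terabyte, which is why density compounds so quickly at data center scale.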

With flash storage, however, increased density comes at a cost: write endurance falls as density rises. Blocks can be read a virtually unlimited number of times, but the number of times they can be written slowly decreases as more bits are packed into each cell. The tradeoff is stark: 3D MLC NAND is rated for 6,000 to 40,000 program/erase cycles, 3D TLC NAND for 1,000 to 3,000 cycles, and four-bit 3D QLC NAND for only 100 to 1,000 cycles. QLC NAND SSDs are much cheaper on a cost-per-GB basis, but the lower price comes with a functional deficiency: their lack of endurance makes them a poor fit for a variety of applications.
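A rough way to see what those cycle ratings imply is to estimate the total volume of data a drive can absorb before wearing out. This sketch assumes perfect wear leveling and no write amplification, which real drives never achieve, and the 2 TB capacity is an arbitrary example:

```python
# Rough endurance estimate: assumes ideal wear leveling and a given
# write-amplification factor; real-world figures vary widely.

def approx_tbw(capacity_tb: float, pe_cycles: int, write_amp: float = 1.0) -> float:
    """Approximate total terabytes writable before the NAND wears out."""
    return capacity_tb * pe_cycles / write_amp

# Low end of each cycle range quoted in the article:
for name, cycles in [("3D MLC", 6_000), ("3D TLC", 1_000), ("3D QLC", 100)]:
    print(f"{name}: ~{approx_tbw(2.0, cycles):,.0f} TB writable on a 2 TB drive")

# 3D MLC: ~12,000 TB writable on a 2 TB drive
# 3D TLC: ~2,000 TB writable on a 2 TB drive
# 3D QLC: ~200 TB writable on a 2 TB drive
```

Even under these generous assumptions, QLC's writable lifetime is roughly two orders of magnitude below MLC's, which is the gap the rest of this article is about managing.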

Matt Hallberg, Senior Product Marketing Manager at Toshiba Memory America, added that when people understand the technology behind QLC and its endurance characteristics, it is an eye-opener. Major semiconductor businesses already ship QLC drives, and their endurance is approximately 0.2 drive writes per day (DWPD). That rating requires a lot of software overhead to ensure that whatever you write lands sequentially, extending the life of the drive. The cost advantage many people expect from QLC can be illusory: vendors suggest a price difference of around 40 percent versus other storage technologies, yet in reality you are only going from three bits per cell to four. There are additional costs with QLC because its implementation changes with the requirements, and endurance is going to play a major role in where it fits.
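For a sense of what 0.2 DWPD means in absolute terms, this short sketch converts the rating into total writes over a drive's service life; the 7.68 TB capacity and 5-year warranty period are assumed examples, not figures from the article:

```python
# Converting a DWPD rating into total terabytes written (TBW).
# The 7.68 TB capacity and 5-year warranty are assumed examples.

def dwpd_to_tbw(dwpd: float, capacity_tb: float, warranty_years: float) -> float:
    """Total terabytes writable implied by a drive-writes-per-day rating."""
    return dwpd * capacity_tb * 365 * warranty_years

tbw = dwpd_to_tbw(dwpd=0.2, capacity_tb=7.68, warranty_years=5)
print(f"0.2 DWPD on a 7.68 TB drive over 5 years = {tbw:,.0f} TB written")
# 0.2 DWPD on a 7.68 TB drive over 5 years = 2,803 TB written
```

The same drive rated at 1 DWPD could absorb five times as much; that multiple is exactly what the software overhead Hallberg describes is trying to claw back.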

Joseph Unsworth, Research Vice President at Gartner, said that QLC technology will be important to the storage business because it reduces costs for NAND suppliers. If QLC's performance can be managed even moderately well, and a number of approaches are being pushed toward that goal, adoption of the technology should increase significantly. Rising demand will push the industry toward QLC, and SSD manufacturers will be the first to adopt advanced flash management techniques. Many of these attempts aim to improve chip-level write endurance (for example, through wear leveling, as sketched below), though the balance struck will depend on the workload and on how the design prioritizes write performance against preserving write endurance.
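As one concrete example of such a technique, the toy wear-leveling allocator below always programs the least-worn erase block so that no block burns through its cycle budget early. This is a minimal sketch; real flash translation layers combine dynamic and static leveling with garbage collection and much more:

```python
# Toy wear leveling: always hand out the erase block with the fewest
# program/erase cycles. Real FTLs are far more sophisticated.

import heapq

class WearLeveler:
    """Tracks per-block erase counts and hands out the least-worn block."""

    def __init__(self, num_blocks: int):
        # Min-heap of (erase_count, block_id): the coolest block pops first.
        self._heap = [(0, block_id) for block_id in range(num_blocks)]
        heapq.heapify(self._heap)

    def next_block(self) -> int:
        """Return the block to erase/program next, incrementing its count."""
        count, block_id = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (count + 1, block_id))
        return block_id

leveler = WearLeveler(num_blocks=4)
print([leveler.next_block() for _ in range(8)])
# Wear spreads evenly across blocks: [0, 1, 2, 3, 0, 1, 2, 3]
```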

Another approach is storage analytics, which can monitor drive health and predict impending failure. Intel's H10 SSD combines QLC NAND and Optane (3D XPoint) storage class memory (SCM) on a single M.2 drive, a design that essentially makes the Optane memory a hot cache in front of the QLC.
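As an illustration of the analytics idea, the following sketch polls SMART attributes via the smartmontools `smartctl` utility and flags common failure precursors. The watched attribute names and the zero threshold are illustrative assumptions, output formats vary by drive, and a production system would feed these readings into a trend model rather than a simple check:

```python
# Minimal drive-health check: shell out to the real `smartctl` tool
# (smartmontools) and flag watched attributes with nonzero raw values.
# Attribute names, threshold, and parsing are illustrative assumptions.

import subprocess

WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def check_drive(device: str) -> list[str]:
    """Return warnings for watched SMART attributes with nonzero raw values."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    warnings = []
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID; column 10 is RAW_VALUE.
        if len(fields) >= 10 and fields[0].isdigit() and fields[1] in WATCHED:
            if int(fields[9]) > 0:
                warnings.append(f"{fields[1]} = {fields[9]} on {device}")
    return warnings

print(check_drive("/dev/sda") or "no watched attributes raised")
```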

The traditional platter drive market has responded by adopting shingled magnetic recording (SMR), now employed to some extent by all three drive manufacturers. SMR is slower than conventional drives and is mostly not a drop-in replacement, although plug-and-play implementations called drive-managed SMR do exist. Drive-managed SMR has recently come under scrutiny: Western Digital cautioned that the housekeeping tasks such a drive must perform can result in highly unpredictable performance, making it unfit for enterprise workloads. The alternative is host-managed SMR, in which the host system is responsible for managing data streams, zone management, and I/O operations. Host management requires added support up the stack; such drives cannot readily be used in desktop systems, and the storage appliance needs modestly higher processing ability to handle these tasks. The additional capacity gained by SMR isn't free: exploiting it requires a commitment from the customer to invest in software development, both in the file system and in the underlying applications.
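To see the constraint that host-managed SMR pushes up the stack, here is a toy model of a zone that accepts writes only at its write pointer; the 256 MiB zone size is a typical figure for SMR drives, and the rest is purely illustrative:

```python
# Toy model of a host-managed SMR zone: writes must be sequential at the
# write pointer, and reusing space means resetting the whole zone.

class Zone:
    """One SMR zone: sequential-write-only, reset to reuse."""

    def __init__(self, size: int):
        self.size = size
        self.write_pointer = 0  # next writable offset within the zone

    def write(self, offset: int, length: int) -> None:
        if offset != self.write_pointer:
            raise ValueError(
                f"non-sequential write at {offset}, write pointer is {self.write_pointer}"
            )
        if offset + length > self.size:
            raise ValueError("write would overflow the zone")
        self.write_pointer += length

    def reset(self) -> None:
        """Discard the zone's contents so it can be rewritten from the start."""
        self.write_pointer = 0

zone = Zone(size=256 * 2**20)  # 256 MiB, a typical SMR zone size
zone.write(0, 4096)            # fine: lands on the write pointer
zone.write(4096, 4096)         # fine: strictly sequential
try:
    zone.write(0, 4096)        # in-place update: rejected under host management
except ValueError as e:
    print("rejected:", e)
```

It is precisely this "no in-place updates" rule that forces the file-system and application investment described above: software must batch and sequence its writes per zone rather than scattering them.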

Conclusion

As density increases on traditional platter drives, input/output operations per second (IOPS) per TB continue to fall. Seagate and Western Digital have publicly discussed their intent to move forward with dual-actuator hard drives. This is not the first attempt: in the mid-1990s, Conner Peripherals' niche Chinook drives earned a reputation for premature failure, partly due to increased vibration caused by head collisions. The growing demand for storage is pushing providers toward modern implementation techniques that deliver reliable, power-efficient storage solutions. John Monroe, Research Vice President at Gartner, notes that necessity is the mother of invention, and at 16TB and above, multi-actuator drives will become a genuine necessity.
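A rough calculation shows both the problem and why dual actuators help; the ~200 random IOPS per actuator is an assumed ballpark for a 7,200 RPM drive, not a quoted specification:

```python
# Why IOPS/TB falls as platters get denser: a spindle delivers roughly
# constant random IOPS regardless of capacity, so bigger drives dilute it.
# 200 IOPS per actuator is an assumed ballpark, not a vendor figure.

IOPS_PER_ACTUATOR = 200

for capacity_tb in (4, 8, 16):
    for actuators in (1, 2):
        iops = IOPS_PER_ACTUATOR * actuators
        print(f"{capacity_tb:>2} TB, {actuators} actuator(s): "
              f"{iops / capacity_tb:6.1f} IOPS/TB")
```

Under these assumptions a 16TB single-actuator drive offers a quarter of the IOPS/TB of a 4TB drive; a second actuator claws half of that gap back.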

To know more, download our latest whitepapers on Data Storage Solutions.