One of the perennial problems of computing is the need for more data storage. The volume of business and personal data generated is massive and continues to grow—cloud services firm Domo estimates that by 2020, 1.7 MB of data will be created every second for every person on earth. This, in turn, creates demand for more storage, and for denser storage: if a 4TB and an 8TB drive both consume 9W in operation, the power savings alone drive demand for density. That says nothing of equipment cost, or of the rack space and real estate required to deploy these systems—a prominent issue as computing continues to move closer to the edge.
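The power argument above can be made concrete with a back-of-the-envelope calculation. The 9W figure comes from the example; the $0.12/kWh electricity rate is an assumption for illustration only.

```python
# Back-of-the-envelope: annual power cost per usable TB.
# 9 W per drive is from the example above; $0.12/kWh is an
# assumed electricity rate, purely for illustration.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12  # USD, assumed

def annual_power_cost_per_tb(capacity_tb: float, watts: float) -> float:
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * PRICE_PER_KWH / capacity_tb

for tb in (4, 8):
    print(f"{tb} TB drive: ${annual_power_cost_per_tb(tb, 9.0):.2f} per TB per year")
# 4 TB drive: $2.37 per TB per year
# 8 TB drive: $1.18 per TB per year
```

At the same wattage, doubling capacity halves the power cost per TB, before counting the saved rack slots.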

These increased densities come at a cost: for flash storage, write endurance falls as density rises. While block reads are essentially unlimited, block write endurance decreases as density increases—3D MLC NAND is rated for 6,000 to 40,000 program/erase cycles, 3D TLC NAND for 1,000 to 3,000 cycles, and 3D QLC (four-bit) NAND for 100 to 1,000 cycles.

Comparatively, QLC NAND SSDs are cheap—from a cost-per-GB viewpoint—though the relative lack of write endurance makes them a poor fit for a variety of applications.

“When people understand the underlying technology of QLC and the endurance the drives are on, it’s an eye-opener,” said Matt Hallberg, senior product marketing manager at Toshiba Memory America. “There’s a major semiconductor company that has a QLC drive, and the endurance is ~0.2 drive writes per day (DWPD). That’s going to require a lot of software overhead to ensure that whatever you are writing to the drives is written sequentially to extend the life of the drive.”
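The DWPD figure translates directly into a total-bytes-written budget. A minimal sketch, using the ~0.2 DWPD from the quote and an assumed 8TB capacity with an assumed five-year warranty (both hypothetical, for illustration):

```python
# DWPD (drive writes per day) -> total terabytes written over the
# warranty period. The 8 TB capacity and 5-year warranty are
# assumptions for illustration; 0.2 DWPD is from the quote above.

def terabytes_written(capacity_tb: float, dwpd: float, warranty_years: float) -> float:
    """Total TB the drive is rated to absorb over its warranty."""
    return capacity_tb * dwpd * 365 * warranty_years

budget = terabytes_written(8, 0.2, 5)
print(f"Rated write budget: about {budget:,.0f} TB over 5 years")
print(f"That is roughly {8 * 0.2:.1f} TB of writes per day")
```

A database or logging workload can burn through 1.6 TB/day easily, which is why sequentializing writes in software matters so much here.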

Hallberg is also quick to dismiss the perceived cost advantage of QLC. “The cost savings people think is there… everyone has this expectation of QLC as having a 40% price difference. When the reality is, you’re going from three layers to four layers. That’s not a 40% change… it’s really 20-25%. There are additional things you have to do with QLC, that actually increase your cost depending on how your QLC is implemented.”


Performance management for QLC can be handled relatively transparently, and there are several approaches, according to Joseph Unsworth, research vice president at Gartner. “QLC technology is an imperative for NAND suppliers to sustainably reduce costs in the future,” he said, adding that enterprise storage environments “will increasingly adopt the technology.”

The first of these is for SSD manufacturers to adopt “advanced flash management techniques… in order to up-level the chip-level write endurance,” with such techniques yielding a 3x to 7x improvement, though the gain depends on how write performance is balanced against preserving write endurance.

Storage analytics could be used to monitor drive health and predict impending drive failure.

Another approach is Intel’s H10 SSD, which combines the company’s Optane (3D XPoint) storage-class memory (SCM) with QLC NAND on a single M.2 drive. This design essentially makes the Optane memory a hot cache. This model of SCM+QLC is akin to the hybrid drives of yesteryear that combined NAND flash with traditional platter drives, according to Tim Stammers, senior analyst at 451 Research.
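The hot-cache idea can be sketched as a two-tier store: a small fast tier absorbs hot reads, and the large QLC tier holds everything. This is a toy LRU model, not how Intel's firmware actually works; all names and policies here are illustrative assumptions.

```python
# Toy model of the SCM+QLC hybrid: a small fast tier (the "Optane"
# cache) in front of a large slow tier (QLC). Real drives use
# firmware-level placement policies not modeled here.

from collections import OrderedDict

class HybridDrive:
    def __init__(self, cache_blocks: int):
        self.cache = OrderedDict()   # LRU order: block -> data (fast tier)
        self.cache_blocks = cache_blocks
        self.qlc = {}                # backing store (slow QLC tier)
        self.qlc_reads = 0           # count of slow-tier accesses

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)   # hot: served from fast tier
            return self.cache[block]
        self.qlc_reads += 1                 # cold: pay the QLC access
        data = self.qlc.get(block)
        self._promote(block, data)
        return data

    def _promote(self, block, data):
        self.cache[block] = data
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)  # evict least recently used
```

Repeated reads of the same block hit the fast tier, so only the first access touches QLC; the same placement trick also shields QLC from repeated small writes in the real product.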

The difficulties of shingled magnetic recording

For the traditional platter drive market, shingled magnetic recording (SMR) is employed to an extent among all three drive manufacturers. SMR drives are slower than conventional drives, and are not precisely drop-in replacements—while there are plug-and-play implementations, called “drive-managed SMR,” Western Digital cautions against this in a blog post: “the background ‘housekeeping’ tasks that the drive must perform result in highly unpredictable performance, unfit for enterprise workloads.”

Western Digital is touting host-managed SMR, in which the host system is responsible for managing data streams, zone management, and I/O operations. This necessarily requires support up the stack: these drives are not readily usable in desktop systems, and storage appliances need modestly more processing power to handle these tasks.
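What "host-managed" means in practice can be sketched with a toy model of the zoned-device abstraction (the real interface is the ZBC/ZAC command set; sizes and names here are illustrative assumptions). Each zone accepts writes only at its write pointer, and space is reclaimed a whole zone at a time:

```python
# Toy model of host-managed SMR zones. Simplified from the zoned
# block device (ZBC/ZAC) model; zone count and size are arbitrary.
# Writes must land exactly at a zone's write pointer; anything
# else is rejected, and the host software must cope.

class ZonedDevice:
    def __init__(self, num_zones: int, zone_size: int):
        self.zone_size = zone_size
        self.write_pointer = [0] * num_zones  # next writable offset per zone

    def write(self, zone: int, offset: int, length: int) -> bool:
        wp = self.write_pointer[zone]
        if offset != wp or wp + length > self.zone_size:
            return False  # non-sequential or overflowing write: rejected
        self.write_pointer[zone] += length
        return True

    def reset_zone(self, zone: int) -> None:
        # Reclaiming space rewinds an entire zone, not single blocks.
        self.write_pointer[zone] = 0

dev = ZonedDevice(num_zones=4, zone_size=256 * 1024 * 1024)
assert dev.write(0, 0, 4096)         # sequential: accepted
assert not dev.write(0, 8192, 4096)  # skips past write pointer: rejected
```

This is the "commitment to invest in software development" Western Digital describes: file systems and applications must batch data into sequential, zone-sized streams rather than issuing random overwrites.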

Western Digital is upfront about these challenges, noting that the (roughly 16%) additional capacity granted by SMR “isn’t free,” emphasizing that “utilizing this capacity requires a commitment on the part of the customer to invest in software development both in the file system and the underlying applications,” adding that “this investment can pay dividends long term since an SMR drive provides lower cost per TB and better total cost of ownership (TCO) when considering the capital and operating cost of the data center.”

“[Cloud vendors] will do the host-side work to support SMR capability, but certainly most other use cases are not ready, they’re not going to be engineered for SMR,” said Scott Wright, director of product marketing at Toshiba.

“Complete confidence” for dual-actuator drives

As densities increase on traditional platter drives, the ratio of input/output operations per second (IOPS) per TB continues to fall. Seagate and Western Digital have publicly discussed their intent to move toward dual-actuator hard drives.
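The falling ratio follows from simple arithmetic: a single actuator delivers roughly constant random IOPS regardless of how much capacity sits under it, so doubling capacity halves IOPS per TB, and a second actuator buys it back. The 80 IOPS figure is an assumed, typical-order number for a 7,200 RPM drive, used only to show the shape of the trend:

```python
# Why IOPS/TB falls with capacity, and what a second actuator
# restores. 80 random IOPS per actuator is an assumption of
# typical magnitude for a 7,200 RPM drive, not a measured figure.

IOPS_PER_ACTUATOR = 80  # assumed

def iops_per_tb(capacity_tb: float, actuators: int = 1) -> float:
    return actuators * IOPS_PER_ACTUATOR / capacity_tb

print(iops_per_tb(8))                # 10.0
print(iops_per_tb(16))               # 5.0  -> doubling capacity halves IOPS/TB
print(iops_per_tb(16, actuators=2))  # 10.0 -> dual actuator restores the ratio
```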

This is not the first time dual actuators have been attempted: Conner Peripherals’ niche “Chinook” drives of the mid-1990s gained a reputation for premature failure, as increased vibration caused head collisions. However, the industry appears to have “complete confidence” in modern implementations, according to John Monroe, research vice president at Gartner, who notes that “Necessity is the mother of invention, and at 16TB and above multi-actuators will be a necessity.”
