2014 Enterprise SSD Year in Review – Some Things Do Change

By Doug Rollins - 2015-01-12

As we move into 2015, I’m struck by how much has changed in the enterprise SSD world since my first foray into SSDs many years ago. Back then, we were all looking at small (16GB or so) SSDs that were relatively affordable, as well as larger ones that, frankly, were not. We were also constrained by the interfaces on those early enterprise SSDs: SATA and SAS (and some Fibre Channel) were pretty much it. We attached them to conventional, HDD-focused ports on HBAs or RAID cards, and we accessed data through historical storage protocols. Even with these clear limitations, all of this was still a huge improvement over what we had before.

Over the ensuing years, a tidal wave of change has influenced enterprise SSDs and how we use them.  And over the last year, the impact has felt the strongest—bringing some extremely important changes for enterprise SSDs. Take a look at my top four:

4. Acquisition Costs Dropped While Performance Improved

This trend has been brewing for a few years, but it really hit stride in 2014. We now have a far better understanding of the workloads that applications place on storage devices: we know that some applications access storage randomly, and in very small “chunks,” while others tend to do the opposite; we know that a given application may be very latency-sensitive, while another may not; and we know that some applications are very write-intensive while others are far more read-focused. This better understanding has enabled us to design, develop, and deploy enterprise SSDs that are optimized for specific workloads, freeing IT design teams from the limitations of a “one-size-must-fit-all-applications” platform approach. What is most remarkable is that despite using more economical materials, these tuned designs have delivered far higher performance and lower latency than any prior generation of enterprise SSDs.

3. Capacity Continued to Grow, Year Over Year

Better understanding of workloads has enabled us to better optimize SSD designs, in particular through the use of MLC NAND in the enterprise. No longer do we have to trade off lower capacity (SLC NAND) to get better performance and lower latency. We’ve been able to more than double the user capacity by moving to MLC-based designs, keeping pace with enterprise workloads by enhancing the media as well as the SSD controller and firmware.
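The capacity gain follows directly from the media: SLC NAND stores one bit per cell, while MLC stores two, doubling raw capacity for the same cell count. A rough back-of-the-envelope sketch (the cell count is an illustrative, hypothetical number, not any specific part):

```python
# Bits stored per NAND cell for each media type.
BITS_PER_CELL = {"SLC": 1, "MLC": 2}

def raw_capacity_gb(cells_billion: float, media: str) -> float:
    """Raw capacity in GB for a hypothetical NAND array with the given cell count."""
    bits = cells_billion * 1e9 * BITS_PER_CELL[media]
    return bits / 8 / 1e9  # bits -> bytes -> GB

# The same hypothetical 128-billion-cell array, built from each media type:
slc = raw_capacity_gb(128, "SLC")
mlc = raw_capacity_gb(128, "MLC")
print(f"SLC: {slc:.0f} GB, MLC: {mlc:.0f} GB ({mlc / slc:.0f}x)")
```

Real drives reserve some of that raw capacity for over-provisioning and metadata, so user capacity is somewhat lower, but the 2x media-level gain is what made the generational jump possible.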

2. New/Better Form Factors Emerged

Inarguably, PCIe is one of the most capable storage-to-host interconnects available. If your focus is low latency and optimal performance, PCIe is the way to go. But until recently, it was very difficult to implement hot-swap capability in PCIe. In January 2013, the SSD Form Factor Working Group approved version 1.0a of the electrical and mechanical specification for PCIe SSDs (in 2.5-inch and 3.5-inch hard drive form factors) with hot-swap support. 2014 saw a far broader choice of standards-based, hot-swap PCIe SSDs in these common HDD form factors, giving system designers their choice of form factor, capacity, features, and suppliers.

1. Nonvolatile Storage-Optimized Protocol Took Hold

The best news for enterprise SSDs in 2014 is that Nonvolatile Memory Express (also known as NVM Express or NVMe) is here, it’s real, and it’s leading the charge for 2015.   

Why Is NVMe Such a Big Deal?

Simple.  NVMe is built from the ground up for enterprise systems without the baggage that legacy interfaces like SAS and SATA bring.  If a host uses PCIe-based SSDs, NVMe is the optimal storage protocol.  It’s far more efficient than legacy protocols, offers extremely low latency, and is purpose-built to provide far more flexibility in command and data management.
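Much of that efficiency comes from command queuing. Legacy AHCI (the host interface behind SATA) exposes a single queue of 32 commands, while the NVMe specification allows up to 65,535 I/O queues with up to 65,536 commands each, so queues can be assigned per CPU core without contention. A quick comparison of the theoretical outstanding-command ceilings:

```python
# Theoretical outstanding-command limits, per the AHCI and NVMe 1.x specifications.
ahci_queues, ahci_depth = 1, 32            # SATA via AHCI: one queue, 32 slots
nvme_queues, nvme_depth = 65_535, 65_536   # NVMe: up to 64K I/O queues x 64K entries

ahci_max = ahci_queues * ahci_depth
nvme_max = nvme_queues * nvme_depth

print(f"AHCI max outstanding commands: {ahci_max}")
print(f"NVMe max outstanding commands: {nvme_max:,}")
```

No real drive implements those maximums, but the headroom is the point: the protocol itself is no longer the bottleneck.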

Who Is Behind NVMe?

The working group includes dozens of well-known memory, storage, and system companies from across the industry.  The founding principle was to define a new storage-to-host protocol to enable the maximum potential provided by nonvolatile memory-based storage technologies—like PCIe SSDs—using an open, standard approach.  This would, in turn, enable a far broader ecosystem and help ensure interoperability.

Why NVMe and Why Now? 

Due to the growing performance disparity between host processing resources (like faster CPUs, more CPU cores, and faster DRAM) and storage resources (including the fastest available SAS and SATA SSDs to date), the storage side of the platform needed a complete overhaul. By 2014, there was no reason to carry forward the legacy of storage devices from a decade ago. Instead, the NVMe working group started fresh, looking not at what had already been done, but at what could be done. By starting over from scratch, NVMe offers the best of today with a clear roadmap moving forward. With broad support from a variety of infrastructure providers (platforms, storage devices, connectors, software, and the like), NVMe is real now, and shipping. If you’re looking for the best storage device ROI of 2014, I believe that honor goes to NVMe.

What are your thoughts on NVMe? Does NVMe top your list of 2014’s enterprise storage innovations? Please drop me a comment below!

Doug Rollins

Doug Rollins is a principal technical marketing engineer for Micron's Storage Business Unit, with a focus on enterprise solid-state drives. He’s an inventor, author, public speaker and photographer. Follow Doug on Twitter: @GreyHairStorage.