I’m no stranger to the storage industry, so when Micron asked me to contribute to the NVMe conversation, I couldn’t say no. In this post, I will look at what Non-Volatile Memory (NVM) Express (NVMe) is, what drove the need for it, and why you should be planning for it in your data center. First, a little background.
Data Center Challenges
Users expect their applications to respond immediately. Today's applications require faster server processors, more compute cores, more memory and even more storage resources to deliver better responsiveness, capacity, and consistent performance.
Figure 1: Application and Server Storage I/O Resources
Fast applications rely on having frequently accessed data as close as possible to the processor itself (see Figure 1). Servers are leveraging larger amounts of faster local storage to augment traditional external shared storage, enabling fast, converged server storage hardware and software.
PCIe Generation Improvements
PCIe, the interconnect bus closest to the server CPU, has evolved to provide more bandwidth and lower latency. Improvements to PCIe Generation 3 (Gen3) have made it capable of supporting faster processors with more cores and more traffic, satisfying the needs of faster applications.
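To put a rough number on that bandwidth claim, here is a back-of-the-envelope sketch. The figures are commonly cited PCIe Gen3 spec values (8 GT/s per lane, 128b/130b encoding), not measurements from this post:

```python
# Approximate PCIe Gen3 usable bandwidth, per direction.
# Spec-level figures for illustration: 8 GT/s per lane with
# 128b/130b line encoding (~1.5% encoding overhead).
GEN3_GT_PER_S = 8.0      # giga-transfers per second, per lane
ENCODING = 128 / 130     # 128b/130b encoding efficiency

def gen3_bandwidth_gbs(lanes: int) -> float:
    """Approximate usable one-direction bandwidth in GB/s."""
    return GEN3_GT_PER_S * ENCODING / 8 * lanes  # /8: bits -> bytes

print(round(gen3_bandwidth_gbs(4), 2))  # x4 link, typical for an NVMe SSD
```

An x4 Gen3 link works out to roughly 3.94 GB/s each way, which is the headroom NVMe devices are designed to exploit.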
Hardware Software I/O Highway Gridlock
Enterprise applications rely on high-performance servers that access NVM flash Solid State Drive (SSD) storage via fast, low-latency I/O roadways (e.g., PCIe) as efficiently as possible. But legacy server storage I/O software protocols and interfaces such as AHCI (SATA) and Serial Attached SCSI (SAS) are not capable of unlocking that full potential.
Fast hardware requires fast software and vice versa. PCIe is currently the fastest I/O data highway available. However, the software protocols defining the traffic flow (the rules of the road) need improvement. Due to these historical protocol limitations, applications are not able to fully utilize available hardware resources. That leads me to NVMe.
Leveraging PCIe, NVMe enables modern applications to reach their potential using high-performance servers with local flash storage via fast I/O data highways. While modern I/O highways (PCIe) and devices (flash SSDs) have improved, a new, optimized and efficient protocol (NVMe) was needed to control I/O data traffic flow at those speeds.
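The "rules of the road" difference can be made concrete with queue limits. The numbers below are commonly cited protocol maximums (what the specs allow, not what any particular device implements): AHCI offers a single queue of 32 commands, while NVMe allows up to 64K I/O queues of up to 64K commands each.

```python
# Spec-level queue maximums, for illustration only.
PROTOCOL_QUEUES = {
    "AHCI (SATA)": {"queues": 1, "depth": 32},
    "NVMe":        {"queues": 65_535, "depth": 65_536},
}

for name, q in PROTOCOL_QUEUES.items():
    outstanding = q["queues"] * q["depth"]  # max commands in flight
    print(f"{name}: up to {outstanding:,} outstanding commands")
```

That difference in concurrency, combined with a leaner command set, is why NVMe keeps multi-core processors and fast flash devices busy where the older protocols become the bottleneck.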
Note that NVMe does not replace SAS or SATA; all three can and will co-exist (Figure 2) for years to come. Inside the same platform, they enable different tiers of server storage I/O performance, aligning the applicable technology (SATA, SAS or NVMe) to different performance and cost requirements.
Figure 2: SATA, SAS and NVMe
The storage I/O capabilities of flash can now be fed across PCIe faster, enabling modern multi-core processors to complete more useful work in less time and boosting application productivity. In addition to enabling more IOPS at lower latency, NVMe also unlocks the bandwidth of PCIe and the associated NVM flash SSD storage to move more data more quickly.
However, there is another benefit of NVMe that does not get discussed enough: all of these I/O improvements (more work done, more data moved, less wait time) are accomplished using less processor CPU time.
Similar to the way modernized vehicle traffic flow protocols on a highway reduce congestion (wait time and latency), NVMe unlocks the potential of NAND flash SSDs via the most effective use of the PCIe I/O data highway. Those on the cutting edge will embrace NVMe rapidly. Others may prefer a phased approach.
What This All Means
Planning is an essential ingredient for any enterprise data center. Because NVMe spans servers, storage, I/O hardware and software, those intending to adopt NVMe need to take into account all ramifications.
NVMe provides both flexibility and compatibility enabling it to be at the ‘top tier’ of storage access and take full advantage of the inherent speed and low latency of flash and multi-core processor servers for fast applications. NVMe removes complexity, overhead and latency while allowing far more concurrent I/O work to be accomplished.
The NVMe benefit is that your applications can process more in a given amount of time, whether transactions per second (TPS), files, frames, videos, images, objects or other items, while spending less time waiting and consuming less CPU overhead. What this means is that you can boost productivity and get more value out of your hardware and software license investment.
Now is the Time
Learn more about NVMe and how it can be leveraged in your data center to enable fast applications and unlock the value in your fast server, storage and I/O resources. You can read my entire paper on the subject here.
The question of NVMe in your data center is not if but rather, when, where, and how. Now is the time to start planning for tomorrow.
Ok, nuff said.
About the Author
Greg Schulz is Founder and Sr. Consulting Analyst of independent IT advisory consultancy firm Server StorageIO and UnlimitedIO LLC (e.g. StorageIO®). He has worked in IT for an electrical utility, financial services, and transportation firms in roles ranging from business applications development to systems management, architecture, strategy and capacity planning. Mr. Schulz is the author of the Intel Recommended Reading List books “Cloud and Virtual Data Storage Networking” and “The Green and Virtual Data Center” via CRC Press and “Resilient Storage Networks” (Elsevier). Greg is a Microsoft MVP and seven-time VMware vExpert. Learn more at www.storageio.com and www.storageioblog.com. Follow on Twitter @StorageIO.