
Why NVMe Should Be in Your Data Center

Preparing for Tomorrow’s Data Center Today

By Greg Schulz and Doug Rollins

Introduction

This Server StorageIO™ low-latency learning piece, compliments of Micron Technology, is the first in a series that provides guidance pertaining to nonvolatile memory express (NVMe). In this piece, we look at what NVMe is, what drove the need for it and why it should be in your data center in the near future. View companion pieces in this series, along with other content, at www.micron.com.

Data Center Challenges

Users expect their applications to respond immediately. To do that, those applications need faster server processors, more compute cores, more memory and more capable storage resources (offering better responsiveness, far more space and more consistent performance). As the underlying hardware capabilities have improved to meet these demands, innovative application developers have found new ways to take advantage of them.

Fast applications rely on having frequently accessed data as close as possible to the processor itself. This migration of storage into the server platform is a growing trend: software and hardware convergence. Servers are leveraging larger amounts of faster local storage to augment traditional external shared storage (as seen in Figure 1), enabling new application models that leverage fast converged server storage hardware and software. Examples of this ‘in server’ convergence include: Virtual Server Infrastructure (VSI); Virtual Desktop Infrastructure (VDI); Microsoft (Scale Out File Services [SOFS], SQL Server, Exchange, Hyper-V); software-defined and virtual storage appliances (SDS and VSAN); and VMware (vSphere, EVO and VSAN). Others include OpenStack cloud and Hadoop big data analytics, along with various content solutions (video, imaging, security and medical) as well as legacy applications.

Figure 1: Application and Server Storage I/O Resources

PCIe Generation Improvements

PCIe, the interconnect bus that is closest to the host CPU, has steadily evolved to provide more bandwidth and lower latency. PCIe is divided into ‘lanes’ (platform designers can choose how many lanes are routed to each PCIe slot). The latest generation (PCIe Gen3) supports about 7.88 Gb/s per lane (e.g., x1), roughly double the effective rate of the previous generation. This doubling trend has held true since PCIe Gen2 replaced Gen1:

                          x1      x4      x8      x16      x32
PCIe lanes                1       4       8       16       32
PCIe Gen1 (Gb/s)          2       8       16      32       N/A
PCIe Gen2 (Gb/s)          4       16      32      64       N/A
PCIe Gen3 (Gb/s)          7.88    31.51   63.02   126.03   N/A
PCIe Gen3 (GB/s)          0.98    3.94    7.88    15.75    N/A
Future PCIe Gen4 (Gb/s)   15.75   63.02   126.03  252.06   504.13

Table 1: PCIe Generations and Number of Lane Configurations
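
The per-lane values in Table 1 follow from each generation’s raw transfer rate and its line encoding (8b/10b for Gen1 and Gen2, 128b/130b for Gen3 and Gen4). The short Python sketch below reproduces them; the transfer rates and encodings are published PCI-SIG parameters, while the function and variable names are purely illustrative.

```python
# Reproducing the per-lane figures from Table 1.
# Transfer rates (GT/s) and encodings are published PCI-SIG values;
# the names used here are illustrative, not from any real library.

GENERATIONS = {
    # generation: (raw rate in GT/s, payload bits, total bits per symbol)
    "Gen1": (2.5, 8, 10),     # 8b/10b encoding: 20% overhead
    "Gen2": (5.0, 8, 10),     # 8b/10b encoding
    "Gen3": (8.0, 128, 130),  # 128b/130b encoding: ~1.5% overhead
    "Gen4": (16.0, 128, 130),
}

def effective_gbps(gen: str, lanes: int) -> float:
    """Effective bandwidth in Gb/s for a PCIe link of the given width."""
    rate, payload, total = GENERATIONS[gen]
    return rate * payload / total * lanes

for gen in GENERATIONS:
    for lanes in (1, 4, 8, 16):
        gbps = effective_gbps(gen, lanes)
        print(f"PCIe {gen} x{lanes:<2}: {gbps:7.2f} Gb/s ({gbps / 8:6.2f} GB/s)")
```

Running the sketch yields 7.88 Gb/s for a Gen3 x1 link and 126.03 Gb/s for Gen3 x16, matching the table above.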

Improvements to PCIe Gen3 have made it capable of supporting faster processors with more cores and more traffic, thereby satisfying the needs of faster applications. Even with today’s faster hardware, including NAND flash solid state drives, bottlenecks remain.

Fast hardware requires fast software and vice versa. PCIe is currently the fastest I/O data highway available; however, the software protocols defining the traffic flow (the rules of the road) need improvement. Due to these historical protocol limitations, applications are not able to fully utilize available hardware resources.

Hardware and Software I/O Highway Gridlock

Modern applications rely on high-performance servers that access flash (SSD) storage via fast, low-latency I/O roadways (e.g., PCIe) as efficiently as possible. But legacy server storage I/O protocols and interfaces such as AHCI (SATA) and serial attached SCSI (SAS) cannot unlock that potential; optimization required that the industry go back to the drawing board, because these legacy I/O traffic control protocols were designed in an era when processors had relatively few cores and systems accessed rotating magnetic hard disk drives (HDDs) with very modest storage I/O demands. Flash-based SSDs using SATA and SAS changed the landscape, providing a performance boost over slower HDDs, but the full performance benefit of flash devices and PCIe hardware had yet to be realized.

NVMe Fundamentals

A fundamental change was needed: a new storage protocol designed from the ground up to take full advantage of SSD capabilities. That protocol is NVMe. Leveraging PCIe, NVMe enables modern applications that demand the highest-performance servers and fast, local flash storage to reach their potential. These modern I/O highways needed a new, optimized, efficient protocol (NVMe) to control I/O data traffic flow at breakneck speed. While the physical data highway (PCIe) has improved, the data traffic protocols also had to be modernized to take advantage of the new hardware.

Figure 2: Fast Converged Servers and Tiered Storage

Note that NVMe does not replace SAS or SATA: they can and will co-exist for years to come, enabling different tiers of server storage I/O performance inside the same platform, with the applicable technology (SATA, SAS or NVMe) aligned to different performance and cost requirements.
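
To make the tiering idea concrete, below is a hypothetical Python sketch that picks the least expensive interface still meeting a workload’s latency requirement. The latency thresholds and example workloads are illustrative assumptions, not vendor guidance or measured figures.

```python
# A hypothetical sketch of tier alignment inside one platform.
# Latency thresholds below are illustrative assumptions, not spec values.

TIERS = [
    # (interface, rough latency budget it can satisfy, in microseconds)
    ("NVMe", 100),
    ("SAS", 500),
    ("SATA", 1000),
]

def pick_tier(required_latency_us: float) -> str:
    """Choose the lowest-cost interface that still meets the latency need."""
    for interface, latency_us in reversed(TIERS):  # cheapest tier first
        if latency_us <= required_latency_us:
            return interface
    return "NVMe"  # the most demanding workloads land on the top tier

print(pick_tier(2000))  # SATA: bulk and archival data
print(pick_tier(600))   # SAS: general-purpose workloads
print(pick_tier(150))   # NVMe: latency-sensitive databases
```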

Server Storage I/O Evolution

Server storage I/O protocols support concurrent activity: applications place I/O commands into a queue that is processed by device controllers and drivers. As technology has progressed, it has created demand for more and deeper queues that can accommodate a larger number of commands, boosting performance and reducing latency through command order optimizations.

How NVMe unlocks the potential of flash-based storage is best appreciated by comparing it to one of the older protocols. SATA allowed only a single command queue holding up to 32 commands, enabling only limited optimization. NVMe, on the other hand, supports 65,536 (64K) queues with 64K commands per queue.
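
The scale of that difference is easy to quantify. A minimal Python sketch follows (the constant names are ours; the queue counts and depths come from the AHCI and NVMe specifications):

```python
# Comparing total command-queue capacity under SATA (AHCI) and NVMe.
# Queue counts and depths are taken from the respective specifications.

SATA_QUEUES, SATA_DEPTH = 1, 32
NVME_QUEUES, NVME_DEPTH = 65_536, 65_536

sata_outstanding = SATA_QUEUES * SATA_DEPTH
nvme_outstanding = NVME_QUEUES * NVME_DEPTH

print(f"SATA/AHCI: {sata_outstanding:,} outstanding commands")  # 32
print(f"NVMe: {nvme_outstanding:,} outstanding commands")       # 4,294,967,296
print(f"Ratio: {nvme_outstanding // sata_outstanding:,}x")      # 134,217,728x
```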

Figure 3: SATA, SAS and NVMe

Similar to the way modernized vehicle traffic flow protocols on a highway reduce congestion (wait time and latency), NVMe unlocks the potential of NAND flash SSDs (e.g., traffic source/destination) via far more effective use of the PCIe data highway.

What This All Means

The storage I/O capabilities of flash can now be fed across PCIe faster to enable modern multi-core processors to complete more useful work in less time, resulting in greater application productivity.

NVMe has been designed from the ground up with more and deeper queues, supporting a larger number of commands in those queues. This in turn enables the SSD to better optimize command execution for much higher concurrent IOPS. NVMe will co-exist along with SAS, SATA and other server storage I/O technologies for some time to come. But NVMe will be at the top tier of storage as it takes full advantage of the inherent speed and low latency of flash while complementing the potential of multi-core processors that can support the latest applications.
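
One way to see why deeper queues matter is Little’s Law: sustained throughput is bounded by the number of outstanding commands divided by per-command latency. The Python sketch below applies that relationship; the 100-microsecond flash read latency is an assumed, illustrative figure, not a measurement.

```python
# Little's Law: throughput = concurrency / latency.
# The latency value here is an illustrative assumption, not a measurement.

def max_iops(outstanding_commands: int, latency_seconds: float) -> float:
    """Upper bound on IOPS at a given queue occupancy."""
    return outstanding_commands / latency_seconds

flash_latency = 100e-6  # assume ~100 microseconds per flash read

# A single AHCI queue caps concurrency at 32 commands:
print(f"{max_iops(32, flash_latency):,.0f} IOPS")    # 320,000
# NVMe lets the host keep far more commands in flight, e.g., 1,024:
print(f"{max_iops(1024, flash_latency):,.0f} IOPS")  # 10,240,000
```

Real devices saturate well before these theoretical ceilings, but the bound shows why a 32-command cap throttles a flash SSD in a way that deeper NVMe queues do not.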

Like the robust PCIe physical server storage I/O interface it leverages, NVMe provides both flexibility and compatibility. It removes complexity, overhead and latency while allowing far more concurrent I/O work to be accomplished.

Those on the cutting edge will embrace NVMe rapidly. Others may prefer a phased approach.

Some environments will initially focus on NVMe for local server storage I/O performance and capacity available today. Other environments will phase in emerging external NVMe flash-based shared storage systems over time.

Planning is an essential ingredient for any enterprise. Because NVMe spans servers, storage, I/O hardware and software, those intending to adopt NVMe need to take into account all ramifications. Decisions made today will have a big impact on future data and information infrastructures.

Key questions include: How much speed do your applications need now, and how do growth plans affect those requirements? How and where can you maximize your financial return on investment (ROI) when deploying NVMe, and how will that success be measured?

Learn more at www.micron.com/storage.

About the Authors

Greg Schulz is Founder and Senior Advisory Analyst of Server StorageIO (StorageIO), an independent IT advisory and consultancy firm. Learn more at www.storageio.com and @StorageIO.

Doug Rollins is a Senior Technical Marketing Engineer, Enterprise Solid State Drives, for Micron Technology’s Storage Business Unit (follow @GreyHairStorage and @MicronStorage).


All trademarks are the property of their respective companies and owners. The Server and StorageIO (StorageIO) Group makes no expressed or implied warranties in this document relating to the use or operation of the products and techniques described herein. In no event shall StorageIO be liable for any indirect, consequential, special, incidental or other damages arising out of or associated with any aspect of this document, its use, reliance upon the information, recommendations, or inadvertent errors contained herein. Information, opinions and recommendations made by StorageIO are based upon public information believed to be accurate and reliable, and are subject to change. Refer to the StorageIO Privacy and Disclosure policy. This industry trends and perspective white paper is compliments of Micron www.micron.com.