
How NVMe Will Revolutionize Server and Storage I/O

Preparing for Tomorrow’s Data Center Today

By Greg Schulz and Doug Rollins

Introduction

This Server StorageIO™ low-latency learning piece, compliments of Micron Technology, is the third in a series that provides guidance on nonvolatile memory express (NVMe) and how this technology can help you prepare today for the data center of tomorrow. Fast applications need fast software, servers, storage and I/O to be productive and cost effective. In this piece, we look at how NVMe will revolutionize server and storage I/O performance and connectivity while removing barriers to productivity. View companion pieces in this series, along with other content, at www.micron.com.

Figure 1: Application and Server Storage I/O Resources

Data Center Trends

The amount of data being stored and accessed continues to increase. There appears to be no stopping the demand for faster processing, larger-capacity memory and storage, and lower latency (response time). New and existing applications (Figure 1), along with their enabling software tools, are helping to drive the sustained demand for more data and timely access to it.

The list of new applications and their storage I/O demands is almost endless: big data analytics tools (Hadoop, SAS, Hortonworks, SAP HANA and others); databases and key-value stores (MySQL, SQL Server, Oracle, Aerospike, Cassandra, Riak, TokuDB and others); big fast data such as video or imagery; and legacy structured and unstructured data (from financial, healthcare, government, geology and energy). Each has a need for speed that places heavy demand on the underlying infrastructure to transform data into information in near real time.

While applications and processors can now operate at this velocity, the surrounding infrastructure has lagged. To eliminate delays, various strategies have evolved: adding more RAM, deploying faster caching technology, and building better server storage I/O data infrastructures using nonvolatile memory (NVM) NAND flash solid state drives (SSDs).

In addition, some applications are being located closer to shared storage to reduce the time required to move large amounts of data between storage devices. In other cases, the storage has been moved closer to the applications (e.g., server-side storage, direct-attached storage). The goal is to achieve locality of reference and reduce the impact of server storage I/O on applications.

But everything is not the same in data centers. Applications and workloads have different requirements, so the underlying technology infrastructure must be able to adapt. Some applications need the highest performance possible, while others are focused on capacity and cost minimization. It is far from a black-and-white, one-size-fits-all proposition.

Data Center Challenges

The demands of modern applications and virtualized workloads have created data center bottlenecks. A web server, virtual desktop, small database server or file server may not exert much of a performance impact by itself. But when you pack all of these and more onto one physical server, performance issues arise; the result: aggregation causes aggravation.

That said, data centers are averse to risk and disruption. They require reliable technologies, but look to avoid too much change at once. That’s why investments in existing technologies, tools, techniques and skillsets must be preserved even as new solutions are introduced to address bottlenecks and improve agility. Today’s server storage I/O protocols such as SCSI (SAS) and AHCI (SATA) have served well and continue to be used for the high-volume, lower-cost tier of server storage I/O access. Likewise, improvements in physical interfaces such as PCIe Gen3, 10 GbE, 40 GbE, InfiniBand and Fibre Channel (FC) have improved the underlying data highway.

Figure 2: AHCI/SATA, SCSI/SAS and NVMe

However, new rules of the road were needed to leverage these physical improvements in order to boost productivity and relieve gridlock. This required new software-defined protocols to unlock the full potential of today’s faster devices. Hence, NVMe was designed from the ground up to address these issues and accommodate the demands of next-generation applications.

Why NVMe Is Revolutionary

NVMe leverages the experience gained with known block storage access and unlocks the performance capabilities of newer and faster devices. It can be implemented in a variety of topologies and configurations, from dedicated direct-attached storage (DAS) on a server or storage system (e.g., the back end) to a shared front-end alternative to block storage protocols.

While initially being deployed as DAS back-end storage in servers or storage systems accessing fast flash storage, NVMe will eventually be used in other ways. The SCSI command set, for example, gradually spread across SAS, iSCSI, Fibre Channel Protocol (FCP) and other transports, both on the back end to access hard disk drives (HDDs) or SSDs and on the front end for servers to access storage systems.

While SATA allows for one command queue capable of holding 32 commands, NVMe enables 65,536 (64K) queues with 64K commands per queue. As a result, the storage I/O capabilities of flash can now be fed across PCIe much faster, enabling modern multi-core processors to complete more useful work in less time.
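To put those limits in perspective, the following is a minimal sketch (in Python, using the protocol maximums rather than what any particular device actually exposes) of the arithmetic behind that comparison:

    # Theoretical queueing headroom: AHCI/SATA versus NVMe.
    # These are protocol maximums; real devices expose far fewer queues.
    ahci_queues, ahci_depth = 1, 32
    nvme_queues, nvme_depth = 64 * 1024, 64 * 1024

    ahci_outstanding = ahci_queues * ahci_depth   # 32 commands in flight
    nvme_outstanding = nvme_queues * nvme_depth   # 4,294,967,296 commands in flight

    print(f"AHCI/SATA max outstanding commands: {ahci_outstanding:,}")
    print(f"NVMe max outstanding commands:      {nvme_outstanding:,}")
    print(f"Ratio: {nvme_outstanding // ahci_outstanding:,}x")

Even allowing for practical device limits, the parallelism available to multi-core processors differs by orders of magnitude.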

Like the robust PCIe physical server storage I/O interface it leverages (e.g., the data highway), NVMe provides both flexibility and compatibility while reducing complexity, overhead and latency. Those on the cutting edge will embrace NVMe rapidly. Others will prefer a gradual approach, focusing on NVMe for local server storage I/O performance and capacity.

Some environments, however, might phase in emerging external NVMe-based shared storage systems. Over time we can also expect to see NVMe being deployed across more servers and inside storage systems (or appliances) to access fast NVM flash-based storage.

Why NVMe Is Beyond Evolutionary

On the surface, NVMe might seem like little more than an evolutionary speed bump, something we have seen over and over in the world of IT for the past several decades. In reality, NVMe is one of those rare generational breakthroughs (versus an evolutionary improvement). Here’s why.

Evolutionary improvements involve going from one speed to another, adding some new functionality, changing low-level encoding or otherwise improving on the base architecture, design or implementation strategy. For example, Ethernet has progressed from 10/100 Mb to 1 GbE, to the emerging 5 GbE twisted pair, and on to 10 GbE, 40 GbE and 100 GbE using copper or optical cabling. Similarly, the SCSI command set has moved from the bulky parallel cables of prior decades to serial implementations on Fibre Channel Protocol (FCP), FCoE, InfiniBand (SRP), IP (iSCSI) and SAS. And in recent years, PCIe has seen steady improvement in lanes and performance as it transitioned from Gen1 to Gen2 and Gen3. While significant, all of these represent evolutionary shifts.

NVMe, on the other hand, is not just another upgrade to an existing command set, protocol or interface. Instead, it is a brand-new protocol designed specifically for the needs of faster, higher-core-count processors running the latest generation of high-speed applications, while unlocking the latent value of faster storage media such as NAND flash SSDs.

Fast Servers and Storage Need Fast NVMe I/O

Processor, memory and NVM (e.g., flash storage) technology have raced forward over the last several years. Unfortunately, existing server and storage I/O protocol architectures have created a major roadblock on the data highway. Think of it like the original single-lane Route 66 highway from Chicago to Los Angeles, built a half-century ago, yet having to handle modern-day traffic. The congestion and delays would be unimaginable.

People would develop workarounds, such as only driving during off-peak hours or moving closer to the office to avoid a long commute. Authorities would institute policies to reduce gridlock, such as allowing certain vehicles to drive only on specific days or imposing a congestion charge to inhibit volume. But those types of efforts would do little to improve the situation.

For the data highway, NVMe is the ultimate solution. It opens up the underlying PCIe infrastructure using new policies and protocols, similar to transforming old Route 66 into a six-lane freeway. The result is that accelerated applications can more effectively use more of the available hardware resources. From a financial perspective, NVMe enables applications to realize their full potential and maximize ROI on server, storage and I/O technology investments.

Revolutionary technologies have a reputation for disrupting customer environments by requiring massive changes or causing delays while waiting for new operating systems, hypervisors, file system management tools, and device drivers to support the hardware devices. NVMe does require new drivers, but once in place, it plugs and plays seamlessly with existing tools, software and user experiences. NVMe-enabled devices have an embedded NVMe controller (software that implements the protocol inside the device).

Even though NVMe is a new protocol, it leverages existing skillsets. Anyone familiar with SAS/SCSI and AHCI/SATA storage devices will require little or no training to implement and manage NVMe. Since NVMe-enabled storage appears to a host server or storage appliance as a LUN or volume, existing Windows, Linux and other OS or hypervisor tools can be utilized.
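As a simple illustration of that point, here is a minimal sketch (in Python, assuming a Linux host with the in-box nvme driver loaded and the standard /sys/class/nvme sysfs layout) that enumerates visible NVMe controllers using nothing more than ordinary file access:

    import glob, os

    # Minimal sketch: list NVMe controllers that Linux has enumerated.
    # Assumes the standard /sys/class/nvme sysfs layout created by the
    # in-box nvme driver; no NVMe-specific library is required.
    for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
        name = os.path.basename(ctrl)
        try:
            with open(os.path.join(ctrl, "model")) as f:
                model = f.read().strip()
        except OSError:
            model = "(model not readable)"
        print(f"{name}: {model}")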

On Windows, for example, other than going into Device Manager to see what the device is and which controller it is attached to, it is no different from installing and using any other storage device. The experience on Linux is similar, particularly when using the in-box drivers that ship with the OS. One minor difference of note on Linux is that instead of seeing a device named /dev/sda, for example, you might see a device name like /dev/nvme0n1.
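The practical consequence is that an NVMe namespace can be used exactly like any other block device. As a minimal sketch (in Python, assuming a namespace at /dev/nvme0n1 and root privileges; the path is illustrative), reading from it requires no NVMe-specific API:

    import os

    # Minimal sketch: read the first 4 KiB from an NVMe namespace.
    # /dev/nvme0n1 is an illustrative path; root privileges are required.
    # The point: it behaves like any other Linux block device.
    DEVICE = "/dev/nvme0n1"

    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        data = os.read(fd, 4096)
        print(f"Read {len(data)} bytes from {DEVICE}")
    finally:
        os.close(fd)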

The way NVMe is being deployed will also help storage managers become comfortable with it. Initially, it is being deployed as back-end storage inside or directly attached to servers, in place of or as a companion to SAS/SCSI, AHCI/SATA or traditional PCIe and M.2 cards. From there, NVMe will find its way into storage appliances and data servers, even if they are accessed using traditional SAN, NAS or object access methods.

Also emerging are shared external NVMe-attached storage appliances that use PCIe connections and switches for in-the-rack shared direct access to high-performance storage. Further out on the horizon is NVMe over fabric using emerging RoCE (pronounced “Rocky”) technology, which is Remote Direct Memory Access (RDMA) over Converged Ethernet. Think of NVMe over fabric as an alternative to SCSI on Fibre Channel (e.g., FCP), SRP or iSCSI, with the benefits of lower latency, higher I/O rates and improved productivity.

Summary, Recommendations and Tips

Now is the time to establish a strategy that encompasses where, how, when and why you will remove data center and server storage I/O bottlenecks with NVMe. Make NVMe part of your hardware and software strategy, and start refining your NVMe plans today.

About the Authors

Greg Schulz is Founder and Sr. Advisory Analyst of independent IT advisory consultancy firm Server StorageIO (StorageIO) (www.storageio.com; @StorageIO).

Doug Rollins is a Senior Technical Marketing Engineer, Enterprise Solid State Drives, Micron Storage Business Unit (@GreyHairStorage and @MicronStorage).


All trademarks are the property of their respective companies and owners. The Server and StorageIO (StorageIO) Group makes no express or implied warranties in this document relating to the use or operation of the products and techniques described herein. In no event shall StorageIO be liable for any indirect, consequential, special, incidental or other damages arising out of or associated with any aspect of this document, its use, reliance upon the information or recommendations, or inadvertent errors contained herein. Information, opinions and recommendations made by StorageIO are based upon public information believed to be accurate and reliable, and are subject to change. Refer to the StorageIO privacy and disclosure policy. This industry trends and perspective white paper is compliments of Micron, www.micron.com.