New Path to Storage I/O Performance and Resiliency With NVMe

Preparing for Tomorrow’s Data Center Today

By Greg Schulz and Doug Rollins

Introduction

This Server StorageIO™ low-latency learning piece, compliments of Micron Technology, is the fourth in a series that provides guidance pertaining to nonvolatile memory express (NVMe) and how this technology can help you prepare today for the data center of tomorrow. In this piece, we look at how NVMe combines performance and resiliency with dual-port capability to enable redundant data paths between fast servers and fast flash storage. View companion pieces in this series, along with other content, at www.micron.com.

Figure 1: Application and Server Storage I/O Resources

Data Center Trends

Today’s applications require fast software, servers, storage and I/O technologies. But to become part of a mission-critical infrastructure, multi-threaded server storage I/O software stacks must also improve resiliency, provide fault isolation and containment, and eliminate single points of failure. As a result, NVMe must offer more than just a boost in performance.

NVMe Paths to Performance and Resiliency

High-performance servers need fast and resilient storage I/O paths to maintain productivity. NVMe’s resiliency comes from its dual-port architecture, which provides multiple paths between servers and storage I/O devices.

Returning to the road traffic analogy from earlier pieces in this series, a data highway with a couple of additional lanes is insufficient. The roadway will eventually become blocked, either due to an accident or severe congestion. Thus, hard shoulders are being added to allow broken-down vehicles to be removed from the main traffic flow and emergency vehicles to rush ahead to a trouble spot when necessary.

NVMe’s dual-path connectivity works in a similar manner. Like serial-attached SCSI (SAS) solid state drives (SSDs) and hard disk drives (HDDs) that support dual paths, an NVMe device has one primary path and one alternate path. If one path fails, traffic keeps flowing steadily through the other path.
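For readers who think in code, here is a minimal failover sketch under stated assumptions: the Path class and its read_block() call are hypothetical stand-ins rather than a real SAS or NVMe interface, and in practice this logic lives in operating system multipath drivers, not application code.

```python
# Minimal dual-path failover sketch. Path and read_block() are hypothetical
# illustrations; real failover is handled by OS multipath drivers.
class Path:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def read_block(self, lba):
        if not self.healthy:
            raise IOError(f"path {self.name} is down")
        return b"\x00" * 512  # placeholder block data


def read_with_failover(primary, secondary, lba):
    """Try the primary path first; fail over to the secondary on error."""
    try:
        return primary.read_block(lba)
    except IOError:
        return secondary.read_block(lba)


# Example: the primary port has failed, so I/O continues on the secondary port.
data = read_with_failover(Path("port-A", healthy=False), Path("port-B"), lba=0)
print(f"read {len(data)} bytes via the surviving path")
```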

Those already familiar with the dual-path capabilities of SAS, as well as SSD and PCIe technology, can leverage previous experience to gain a head start in selecting NVMe deployment options and configuration topologies.

With dual-path NVMe, SAS-savvy administrators should experience reduced learning times and be more comfortable implementing projects, despite dealing with a new technology; they should be able to more quickly harness NVMe’s performance along with its deployment flexibility and dual-path resiliency.

Overcoming I/O Path Bottlenecks

SAS and SATA disk technology have been stalwarts of the storage industry for many years, and will undoubtedly be part of most environments for many years to come. But SATA only enables a single path with one command queue that can hold 32 commands. SAS technology improved upon that capability significantly by offering a queue with 64,000 (64K) command entries and dual-port connectivity. Many data centers use SAS storage devices (HDD and SSD) for performance and SATA devices for lower-cost applications including backup, bulk and archive, as well as low-cost, web-scale storage. Because of its performance capabilities, SAS has evolved into a higher tier of storage than SATA.

Figure 2: Comparing AHCI/SATA, SAS and NVMe
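As a rough illustration of why the queue differences matter, the short sketch below compares theoretical maximum outstanding commands. The AHCI/SATA and SAS figures come from the paragraph above; the NVMe figures are the commonly cited specification maximums (up to 64K I/O queues with up to 64K commands each) and are an assumption added here, not a number from this piece.

```python
# Back-of-the-envelope comparison of theoretical maximum outstanding commands.
ahci_sata = 1 * 32            # one queue, 32 commands (as noted above)
sas       = 1 * 64_000        # one queue, ~64K command entries (as noted above)
nvme      = 65_535 * 65_536   # commonly cited spec limits: ~64K queues x ~64K commands

for name, depth in (("AHCI/SATA", ahci_sata), ("SAS", sas), ("NVMe", nvme)):
    print(f"{name:>9}: {depth:,} outstanding commands (theoretical maximum)")
```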

These technologies come from an era when processors had fewer cores, fewer and slower HDDs connected to a server (or storage system), and SSDs were largely unknown. While 12 Gb/s SAS and 6 Gb/s SATA have enabled faster flash-based SSDs, the full performance benefit of flash was not yet realized, even over faster PCIe, because those protocols were not designed for flash.

Unlocking Server Performance With NVMe

When multi-core, multi-threaded, high-RAM, SSD-enabled server technology is used in conjunction with the latest breed of applications but fed through legacy storage I/O protocols, it is not surprising that many of these applications experience slow-downs, lengthy queueing, blockages and limited tenancy.

Demanding applications and higher tiers of storage are going to gravitate toward NVMe because the storage I/O capabilities of flash can be fed across PCIe faster, enabling modern multi-core processors to complete more useful work in less time. This adds up to higher speeds, fewer queuing delays and potential blockages, and better support for multi-tenancy.
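One reason NVMe scales with modern multi-core processors is that each core or thread can own its own submission/completion queue pair instead of contending for a single shared queue. The sketch below models that idea with ordinary software threads and queues; it is a conceptual illustration, not an NVMe driver API, and the core and I/O counts are arbitrary.

```python
# Conceptual model of per-core queues: each worker owns its own queue, so no
# lock is shared between cores. Not an NVMe driver API; counts are arbitrary.
import queue
import threading

NUM_CORES = 4        # assumed core count for the sketch
IOS_PER_CORE = 1000  # illustrative I/O count per core

def worker(core_id, submission_queue):
    for i in range(IOS_PER_CORE):
        submission_queue.put(("read", core_id, i))  # submit a simulated command
        submission_queue.get()                      # retire it (simulated completion)

threads = []
for core in range(NUM_CORES):
    sq = queue.Queue()  # one private queue per "core", mirroring NVMe's model
    t = threading.Thread(target=worker, args=(core, sq))
    threads.append(t)
    t.start()

for t in threads:
    t.join()
print("all simulated I/O completed without a single shared queue")
```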

While the performance gains of NVMe have received the most praise, its resilience will also resonate with many system architects. Like SAS, NVMe enables the dual-pathing of devices, permitting system architects to design resiliency into their solutions. Figure 3 shows a SATA device without dual-pathing, where the loss of a path results in loss of access to the device. Figure 3 also shows SAS and NVMe devices with a primary path (solid green line) and a secondary path (dotted green line) for resiliency. When the primary path fails (red X), access to the storage device is maintained through failover so that fast I/O operations can continue with SAS and NVMe.

Figure 3: Benefit of Dual-Path Resiliency

NVMe Enabling Data Center PACE

NVMe is an enabling technology that can help address performance, availability, capacity and economics (PACE), as described below.  

Performance – NVMe enables concurrent, multi-threaded applications to perform significantly better by servicing a larger number of queues and more commands per queue. This is accomplished while reducing latency and leveraging underlying flash SSD technology, resulting in improved productivity.

Availability – Multi-pathing enables NVMe devices to have resilient paths between the server and flash storage devices, similar to the capabilities of SAS. In-the-box drivers and utilities also simplify the management of NVMe devices with improved accessibility (a brief scripted-management sketch follows this list).

Capacity – NVMe builds on the scaling capabilities of SAS and SATA to support more flash and larger flash-enabled devices while harnessing flexible PCIe connectivity to support performance and availability.

Economics – Leveraging familiarity with SAS and SATA, NVMe reduces complexity and risk while lowering costs and maximizing investment in hardware, software and personnel resources. The economic benefit of unlocking more value from hardware to accelerate applications can enable enterprises to improve their return on technology investment while also reducing their total cost of ownership. The result is a cost-effective, resilient, flexible and scalable data infrastructure with the ability to improve application productivity.
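As one small example of the availability and management points above, the sketch below scripts a basic inventory and health check using the open-source nvme-cli utility, assuming a Linux host where it is installed; the device name /dev/nvme0 is a placeholder for illustration.

```python
# Minimal scripted check of NVMe devices using nvme-cli (assumed installed on a
# Linux host). /dev/nvme0 is a placeholder device name.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["nvme", "list"])                     # enumerate attached NVMe devices
run(["nvme", "smart-log", "/dev/nvme0"])  # report health and wear counters
```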

NVMe Configuration Topology Options

In addition to performance and availability, NVMe is also flexible across different deployment and configuration scenarios. Initially, NVMe is being deployed inside servers as fast, low-latency back-end storage using PCIe add-in cards (AIC) and flash drives.

Figure 4: Where NVMe Can Be Found

Figure 4 shows NVMe devices, including flash AIC, in storage systems and appliances as back-end storage, co-existing with SAS or SATA devices. Another emerging deployment configuration scenario is shared NVMe direct-attached storage (DAS) with multiple-server access via PCIe external storage with dual-paths for resiliency.

Figure 5: NVMe Devices in Shared Storage

Because it is a protocol that can be deployed on different interfaces and transports, similar to the SCSI command set, NVMe also has the flexibility to be deployed on low-latency fabric networks. For example, using RDMA over Converged Ethernet (RoCE) or InfiniBand-based networks allows NVMe to be deployed beyond the distance limits of a physical rack or cabinet, instead spanning a data center. This is similar to how the SCSI Fibre Channel Protocol (SCSI_FCP) is commonly used today.
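As a hedged sketch of such a fabric deployment, the snippet below shows how a Linux host with nvme-cli might attach a remote NVMe subsystem over an RDMA transport (as used by RoCE and InfiniBand). The target address, port and subsystem NQN are placeholders; a real environment would obtain them from discovery and site configuration.

```python
# Hedged sketch: attach a remote NVMe subsystem over an RDMA fabric using
# nvme-cli's "nvme connect". Address, port and NQN below are placeholders.
import subprocess

target_addr = "192.0.2.10"                         # placeholder target IP
subsystem_nqn = "nqn.2016-06.com.example:subsys1"  # placeholder subsystem NQN

subprocess.run([
    "nvme", "connect",
    "--transport", "rdma",    # RoCE and InfiniBand fabrics use the RDMA transport
    "--traddr", target_addr,
    "--trsvcid", "4420",      # conventional NVMe over Fabrics port
    "--nqn", subsystem_nqn,
], check=True)
```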

Summary

NVMe is one of those rare, generational protocol upgrades that come around every couple of decades to help unlock the full performance value of servers and storage. Just as today’s average laptop rivals the supercomputer of 20 years ago, NVMe could help push new server storage I/O architectures and applications to even greater heights.

The flexibility of NVMe allows it to be immediately leveraged in existing environments while opening the door to new architectures and possibilities. Those already familiar with PCIe, SAS and SATA can get to know it rapidly and harness it in their data centers to address current trouble spots and bottlenecks. By then, the vendor community will be rife with new, exciting ways to utilize NVMe.

Just as server computer performance has increased, so too has the speed of low-latency, flash-based SSD storage. Now is the time to upgrade the server storage I/O protocols that define how fast data highways are used.

Learn more at www.micron.com/storage

About the Authors

Greg Schulz is Founder and Sr. Advisory Analyst of independent IT advisory consultancy firm Server StorageIO (StorageIO) (www.storageio.com; @StorageIO).

Doug Rollins is a Senior Technical Marketing Engineer, Enterprise Solid State Drives, Micron Storage Business Unit (@GreyHairStorage and @MicronStorage).


All trademarks are the property of their respective companies and owners. The Server and StorageIO (StorageIO) Group makes no express or implied warranties in this document relating to the use or operation of the products and techniques described herein. In no event shall StorageIO be liable for any indirect, consequential, special, incidental or other damages arising out of or associated with any aspect of this document, its use, reliance upon the information or recommendations, or inadvertent errors contained herein. Information, opinions and recommendations made by StorageIO are based upon public information believed to be accurate and reliable, and are subject to change. Refer to the StorageIO privacy and disclosure policy. This industry trends and perspective white paper is compliments of Micron www.micron.com.