By Greg Schulz and Doug Rollins
This Server StorageIO™ low-latency learning piece, compliments of Micron Technology, is the second in a series that provides guidance pertaining to nonvolatile memory express (NVMe) and how this technology can help you prepare today for the data center of tomorrow. In this piece, we look at how to get ready for the NVMe server storage I/O wave in order to keep pace with the rising tide of increased data and I/O performance. View companion pieces in this series, along with other content, at www.micron.com.
Data Center Challenges
While the first piece in this series provided a general overview of NVMe and its benefits, this one focuses more on implementation planning and strategy.
There are important decisions you should be making today to set the stage for investment protection and future-proofing server storage I/O acquisitions related to NVMe. Knowing what will be needed enables IT to plan for a smoother transition to an NVMe future. Figure 1 shows a modern server storage I/O architecture model with NVMe.
Figure 1: Server Storage I/O Architectures
SAS, SATA and NVMe
Implementation of NVMe should move forward with a full appreciation of what it is and how it fits in with existing data storage protocols. NVMe is a new protocol alternative to the Advanced Host Controller Interface (AHCI), also known as SATA, and the SCSI protocol used by Serial Attached SCSI (SAS). Both SATA and SAS are used today for accessing hard disk drives (HDDs) and solid state drives (SSDs). NVMe has been designed from the ground up for accessing fast storage, including flash SSDs leveraging PCI Express (PCIe). The benefits include lower latency, improved concurrency, increased performance and the ability to unleash far more of the potential of modern multi-core processors.
Figure 2: AHCI/SATA, SCSI/SAS and NVMe
SAS and SATA disk technologies have served the storage industry well. They were devised in an era of fewer processor cores and fewer, slower HDDs connected to a server (or storage system). While 12 Gb/s SAS and 6 Gb/s SATA (e.g., SATA III) have enabled the use of faster flash-based SSDs, the full performance benefit of flash is not being realized, even over faster PCIe. Unlike NVMe, SAS and SATA were not designed with flash SSDs in mind.
NVMe unlocks the potential of flash-based storage by allowing up to 65,536 (64K) queues with 64K commands per queue. By contrast, SATA allows only a single command queue holding 32 commands, and SAS supports a single queue of 254 commands. These and other improvements enable concurrency while reducing latency, removing server storage I/O traffic congestion. The result is that applications demanding more concurrent I/O activity along with lower latency (response time) will gravitate toward NVMe for fast-access storage.
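As a quick sanity check of those queue figures, the maximum number of commands NVMe can keep in flight is simply the product of the two limits; a short calculation in any POSIX-style shell makes the gap to AHCI/SATA concrete:

```shell
# NVMe: up to 65,536 (64K) queues, each holding up to 65,536 commands.
nvme_total=$(( 65536 * 65536 ))
echo "NVMe maximum outstanding commands: $nvme_total"   # prints 4294967296 (2^32)

# AHCI/SATA: a single queue of 32 commands.
echo "SATA maximum outstanding commands: 32"
```

Whether any workload ever approaches that theoretical ceiling is beside the point; the orders-of-magnitude difference is what lets NVMe keep many cores issuing I/O concurrently.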
This is not to say that existing SAS and SATA technologies are going away anytime soon. SATA and SAS will remain for both HDD and flash SSD high-volume, low-cost infrastructures that need backward compatibility. This means SATA- and SAS-based SSDs will continue to be used in environments such as cloud, service providers, secondary storage and data protection. So don’t expect NVMe to be an all-conquering force in the data center, at least for a few years. Rather, NVMe will find its way into the data center by way of small to large enterprise, cloud and web-scale environments, co-existing with SAS and SATA devices.
The question is not if NVMe is in your future (it is). The real questions to consider are when you will deploy NVMe, where you will deploy it (server, storage system or appliance, workstation), what device type to use (M.2, SFF-8639 drive, PCIe card), how to use it (as a cache or storage device) and what else is needed. Keep in mind that NVMe does not have to be an all-or-nothing proposition. You can have NVMe your way, when and where needed.
Where to Deploy NVMe Devices
NVMe can be deployed today inside servers or in storage systems and appliances. Where you deploy it depends on what you are trying to accomplish. Ask yourself where your applications need the extra horsepower to eliminate bottlenecks and unlock the I/O potential of your environment. Perhaps you only need it inside one server to achieve the results you envision; in other cases, you may need it across multiple servers and storage arrays.
How to Determine Existing I/O Bottlenecks
Before choosing, it is always best to assess your server and storage I/O traffic. Similar to checking for congestion on the highway before deciding on the route for your morning commute, knowing your traffic flows is a crucial factor in determining where and how to deploy NVMe. The best way to investigate this is by using operating system tools such as Windows Perfmon or Spotlight, Linux iotop, or VMware esxtop and VisualEsxtop, among other utilities. These types of tools can help you isolate I/O roadblocks and profile your server storage I/O activity.
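On a Linux host, for example, a first pass with iostat (from the sysstat package) plus the kernel's own counters can surface saturated devices; the device-name pattern in the last command is an assumption and may need adjusting for your drive naming:

```shell
# Extended per-device statistics, sampled every second, 3 samples.
# Sustained %util near 100 with rising await (average I/O wait, ms)
# suggests a device-level bottleneck.
if command -v iostat >/dev/null; then
    iostat -x 1 3
fi

# Per-process I/O view (needs root); -o shows only processes doing I/O:
#   sudo iotop -o

# Raw kernel counters when no tools are installed: field 4 is reads
# completed, field 8 is writes completed since boot.
awk '$3 ~ /^(sd|nvme|vd)/ {print $3, "reads:", $4, "writes:", $8}' /proc/diskstats
```

A few samples taken at peak periods are worth far more than a single snapshot; capture the same counters before and after any NVMe deployment so the benefit can be measured rather than assumed.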
How to Connect Your NVMe Devices
The simplest way to deploy NVMe is via a PCIe card that plugs directly into an available slot on a server motherboard. Not all servers will be compatible. Those released over the last year or two with PCIe Gen3 slots or better will accept NVMe cards. Some PCIe Gen2 slots may also support NVMe. Bottom line: Look to deploy NVMe on either brand new servers within your data center or those acquired most recently. Those servers will typically have faster processors and more RAM, which makes them better able to take advantage of this new technology.
What if No Slots Are Available Inside the Server?
If no slots are available on the motherboard, or there is not enough space inside the server to fit another PCIe card, an alternative is the 8639 NVMe connector (SFF-8639), available inside some servers’ storage enclosures. This makes it possible to add NVMe in much the same way as adding a disk drive. Performance via the 8639 connector, which is PCIe x4, may not be as high as a PCIe card running at x8 or faster. Note that an 8639 port must attach to the server motherboard via a PCIe connection, so pay attention to the number and type of available PCIe slots.
Is the Current Hardware NVMe Ready?
Once you know where you want to deploy NVMe, what is required for deployment is a server (or storage system) with a PCIe slot that physically supports an NVMe-enabled flash card or an NVMe drive form factor device. But it is not enough to have an available PCIe slot. You also need a server with BIOS and firmware that can support NVMe, along with devices that have an NVMe controller. In some cases, manufacturers offer upgrades that can make existing servers NVMe-compliant, but this may not be possible for older servers. Therefore, if you determine that replacement hardware will be required in order to implement NVMe, carefully review your options.
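On a Linux server, a few standard commands show whether the platform and BIOS have exposed an NVMe-capable device; the PCI address in the commented line is only a placeholder, not a real device:

```shell
# Is an NVMe controller visible on the PCIe bus?
if command -v lspci >/dev/null; then
    lspci | grep -i 'non-volatile memory' || echo "no NVMe controller found"
fi

# Negotiated PCIe link speed/width for one device (placeholder address):
#   sudo lspci -s 01:00.0 -vv | grep -i lnksta

# Has the kernel created NVMe block devices?
ls /dev/nvme* 2>/dev/null || echo "no NVMe block devices present"
```

If the controller appears on the bus but no block devices show up, the gap is usually firmware, BIOS settings or the driver, which leads into the software question below.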
What if Additional Software Is Needed for NVMe?
Whether you need additional software depends on which operating system, hypervisor and version you have running. Some have “in the box” NVMe drivers while others do not. As on the hardware side, software beyond a certain age may not have been written with NVMe in mind. Major operating systems, including Linux distributions and Windows, as well as hypervisors such as VMware, either include “in the box” drivers or make them easily installable.
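On Linux the driver check is straightforward; the commands below query the running kernel (CONFIG_BLK_DEV_NVME is the standard kernel option name, and stornvme is Windows’ in-box NVMe driver, mentioned only as a pointer for Windows administrators):

```shell
# Driver shipped as a loadable module? (No output can simply mean the
# driver is built into the kernel rather than built as a module.)
if command -v modinfo >/dev/null; then
    modinfo nvme 2>/dev/null | head -3
fi

# The kernel build configuration records NVMe support either way:
grep 'CONFIG_BLK_DEV_NVME' "/boot/config-$(uname -r)" 2>/dev/null \
    || echo "kernel config not available to inspect here"

# On Windows, the in-box driver is stornvme.sys; from a Windows prompt:
#   driverquery | findstr /i stornvme
```

Any reasonably current Linux distribution or Windows release should pass this check out of the box; older releases may need a vendor-supplied driver or an operating system update.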
Summary, Recommendations and Tips
Now is the time to establish a strategy that encompasses where, how, when, and why you will remove data center and server storage I/O bottlenecks with NVMe. The best approach is to get your feet wet with the technology by deploying it on a small scale. Iron out the kinks in terms of necessary hardware and software upgrades, driver availability and application utilization. Then take that knowledge and expand the role of NVMe further into the data center.
About the Authors
All trademarks are the property of their respective companies and owners. The Server and StorageIO (StorageIO) Group makes no expressed or implied warranties in this document relating to the use or operation of the products and techniques described herein. In no event shall StorageIO be liable for any indirect, consequential, special, incidental or other damages arising out of or associated with any aspect of this document, its use, reliance upon the information or recommendations, or inadvertent errors contained herein. Information, opinions and recommendations made by StorageIO are based upon public information believed to be accurate and reliable, and are subject to change. Refer to the StorageIO Privacy and Disclosure policy here. This industry trends and perspective white paper is compliments of Micron, www.micron.com.