NVMe is breaking the storage bottleneck. You’ve likely heard the tales of its huge bandwidth and latency measured in microseconds instead of milliseconds. It sits close to your compute over PCIe, and it’s lightning-fast. You’ve heard about how it’s dramatically changing applications and performance expectations. But you may have also caught whispers about needing something new to support NVMe – next-gen servers, or a special slot for NVMe drives. Does this mean you can’t get the benefits of NVMe today?
NVMe is available now
The beauty of NVMe is that it requires just three things:
- A PCIe electrical connection – PCIe slots are standard in any modern system.
- A driver – Modern operating systems have NVMe drivers built in. Micron also offers drivers for select legacy versions of Linux and Windows.
- An NVMe SSD – Micron happens to make the current industry-leading NVMe PCIe SSD.
In other words, you already have everything you need! So what is all this commotion about needing new systems for NVMe?
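On a Linux host, you can sanity-check this yourself: the in-kernel NVMe driver exposes each attached namespace as a block device such as /dev/nvme0n1. The sketch below is an illustrative helper (not a Micron tool) that simply lists what the driver has surfaced:

```python
import glob


def list_nvme_namespaces():
    """Return block-device paths for NVMe namespaces visible to Linux.

    The kernel NVMe driver names namespaces /dev/nvme<ctrl>n<ns>,
    e.g. /dev/nvme0n1. Note this simple glob also matches partitions
    such as /dev/nvme0n1p1; that's fine for a quick check.
    """
    return sorted(glob.glob("/dev/nvme*n*"))


if __name__ == "__main__":
    devices = list_nvme_namespaces()
    if devices:
        print("NVMe namespaces found:", ", ".join(devices))
    else:
        print("No NVMe namespaces detected (or not running on Linux).")
```

If this prints an empty result on a box you expected to have NVMe, check that the drive is seated in a PCIe-connected slot or bay and that your kernel is recent enough to include the NVMe driver.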
A lot of the confusion arises from the misconception that NVMe is a form factor. NVMe is a logical interface for storage – it lives in software and firmware, and rides over a PCIe bus (usually Gen 3) as its electrical connection. NVMe SSDs come in all shapes and sizes:
- Hot-pluggable U.2, which physically looks like a 2.5” drive
- Add-in cards that drop right into a motherboard slot
- Tiny M.2 sticks that can fit into notebooks and tablets, or be used in dense open-compute-type architectures
- BGA – That’s right, there’s even a mobile chip implementation
Of these form factors, add-in cards (AIC) are the most widely available for the datacenter today. M.2 slots typically appear on server motherboards only as a boot option, but multiple M.2 drives can be aggregated on a single PCIe riser card, which then plugs into a motherboard slot like a standard add-in card. Both AIC and M.2 can be used now.
A lot of the excitement for NVMe centers on the U.2 form factor. It’s a familiar size and shape, and drives can be accessed and serviced conveniently from the front of a server without powering down the system. Formerly known as SFF-8639, U.2 has actually been commercially available for some time – even predating NVMe – but usually as a special configuration that has to be ordered. Often only four bays in a server are configured for PCIe/NVMe, with the other slots dedicated to SATA and SAS interfaces. Some server vendors are starting to offer systems with 10, 12, and even 24 U.2 slots. I expect we’ll see more of these in the future.
One reason we’ll see more U.2 slots is that new processor architectures are increasing the number of PCIe lanes per CPU. That allows more connectivity for networking, GPUs and other special-purpose PCIe devices, and it also opens up more I/O capacity for NVMe storage.
Another change revolves around the flexibility of the U.2 connector. U.2 is actually designed to support SATA, SAS and PCIe in one unified socket. Unfortunately, most system makers don’t cable their connectors this way because of the prohibitive cost of cabling. New backplane and connector solutions are changing the economics, however, making truly tri-interface bays more feasible. And the value of NVMe continues to increase, driving higher user demand for U.2 bays.
What this means for you
If you aren’t already testing, evaluating and using NVMe, you should be! In our own architecture testing, we’ve seen dramatic increases in performance at compelling value for a wide variety of applications, including virtualization, multiple database and OLTP use cases, object stores like Ceph, NoSQL implementations like Cassandra and a host of other workloads. Chances are, you can take advantage of NVMe now, even with your existing infrastructure. And if you’re investigating new builds, NVMe should be a no-brainer.
Learn more about Micron PCIe NVMe SSDs and connect with Micron on Twitter @MicronStorage and on LinkedIn.