Computing the Way It Should Be

By Ryan Baxter - 2015-11-18

Have you ever kicked your computer's power cord or had the system freeze while working on an important project, and lost that work forever simply because you hadn't yet saved it to disk? That's one of the disadvantages of traditional computing architectures: DRAM loses its data when it loses power. It's annoying for a consumer, but absolutely unacceptable to the enterprise (so much so that enterprises build multiple workarounds to avoid it). But what if you could actually count on that speedy system memory to retain data, even when the power is lost? This new addition to the memory/storage hierarchy is something we're calling persistent memory, and I'm excited to tell you that it's becoming a reality. But first, a little background.

Today's memory is composed mostly of DRAM, which provides a great advantage in latency and speed. However, the significance of this advantage is sometimes hard to grasp when stated in nanoseconds, microseconds, and milliseconds. To better illustrate just how fast DRAM is, I thought it would be interesting to draw an analogy to something we can all understand, like making a pizza. Let's look at how latency differs among DRAM, PCIe SSDs, and 10K hard drives, and how that translates to, say, finding and retrieving a tomato for making your pizza. Using DRAM would be like walking across your kitchen to the fridge to get that tomato, taking all of about 6 seconds. Having that same data stored on a high-speed PCIe SSD would be like driving to the produce stand at the edge of town in heavy traffic, taking you over 2 hours and putting the prospect of a reasonable dinner hour at risk. If you had to get that same data from 10K HDDs, the delay would be like growing and ripening your tomato from seed, an estimated 46 days; you would starve before your pizza was done.
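The analogy above is just a single scale factor applied to device latencies. As a rough sketch, assume ballpark figures of about 10 ns for a DRAM access, 12 µs for a PCIe SSD read, and roughly 6.6 ms for a 10K HDD seek plus rotation (illustrative round numbers, not measured values); scaling so that the DRAM access becomes a 6-second walk to the fridge reproduces the pizza timescales:

```python
# Scale device latencies so a ~10 ns DRAM access maps to a 6-second
# walk to the fridge, then express the SSD and HDD latencies on that
# same "pizza" timescale. Latencies are rough illustrative assumptions.

DRAM_S = 10e-9    # ~10 ns DRAM access
SSD_S = 12e-6     # ~12 us PCIe SSD read
HDD_S = 6.6e-3    # ~6.6 ms 10K HDD seek + rotational delay

FRIDGE_WALK_S = 6.0                  # the analogy's DRAM time
scale = FRIDGE_WALK_S / DRAM_S       # a 600-million-fold slowdown

ssd_hours = SSD_S * scale / 3600
hdd_days = HDD_S * scale / 86400

print(f"PCIe SSD ~ {ssd_hours:.1f} hours")   # ~2 hours
print(f"10K HDD  ~ {hdd_days:.0f} days")     # ~46 days
```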

So why not always keep your most critical data in DRAM, ready whenever you need it? Remember that power cord incident? DRAM's volatility means that critical data can be lost in a power outage. What's more, the need to refresh DRAM (each cell roughly every 64 milliseconds) is becoming increasingly costly in terms of both energy usage and performance, a situation that grows more acute as customers' density requirements rise.

Last week we announced a product that lets you have the best of both worlds. NVDIMMs, or nonvolatile DIMMs, represent what we believe is the first step toward our end goal of providing very fast, nonvolatile memory on the DRAM bus. NVDIMMs provide a memory subsystem that delivers DRAM speed with the persistence of NAND; an attached capacitor bank provides enough energy to move data safely from DRAM into NAND in the event of a power loss. At the same time, NVDIMMs accelerate application performance by removing constricting I/O bottlenecks; persistent memory gives IT architects a whole new way to think about the memory/storage data-handling hierarchy. Our 8GB DDR4 NVDIMM is Micron's first persistent memory solution, and it's sampling today.
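One way software can exploit that removal of the I/O bottleneck is to map a persistent region directly into the address space and update data in place with loads and stores, flushing instead of issuing block I/O. The sketch below uses an ordinary temp file and Python's `mmap` purely to illustrate the programming model; the path and workflow are hypothetical, and on a real NVDIMM system the file would live on a persistent-memory-aware (DAX-style) filesystem with cache-line flushes in place of a page flush:

```python
import mmap
import os
import tempfile

# Illustrative stand-in for a file on persistent memory (on real
# hardware this would be a DAX-mounted pmem filesystem path).
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")

SIZE = 4096
# Create and size the backing file once.
with open(path, "wb") as f:
    f.truncate(SIZE)

# Map it and update records in place: no read()/write() syscalls on
# the data path, just memory stores followed by a flush.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as m:
        m[0:5] = b"hello"
        m.flush()   # on real pmem: flush CPU caches to the DIMM

# After a (simulated) restart, the data is still there.
with open(path, "rb") as f:
    assert f.read(5) == b"hello"
```

The point of the model is that persistence becomes a property of ordinary memory operations rather than of a separate storage stack.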

What is equally exciting for us is the industry momentum behind NVDIMMs and persistent memory. As you can imagine, data must be handled differently to take full advantage of these features, and enablement work on both the hardware and software sides will let IT architects realize the benefits that NVDIMM technology can bring to their systems. Within the next year, we expect to see server platforms that provide persistent 12V power to dedicated NVDIMM slots, eliminating the need for capacitor banks attached to each DIMM.

Keeping your data safe no matter what, combined with the performance and latency of DRAM – that’s what we think the next generation of memory looks like. And it’s computing the way it should be.

Check out our NVDIMM products.

Ryan Baxter