Today we wear more computing power and communication capability on our wrists than could have been imagined even a half century ago. Watches not only tell us the time, but remind us of appointments, measure our heart rate, count the flights of stairs we climb, track our sleep, record how long and where we work out, and respond to text messages. We are living in the aptly named “Information Age.”
What do we do with all of this information? Does it serve a purpose? What is its value? Though it is now very easy to generate and aggregate information—even from millions of sources—gaining knowledge or learning from this information is not always easy. Data scientists are feverishly working to extract useful knowledge from billions of data points. We often call this “big data,” and the process of gleaning gems of knowledge from it can be like looking for a needle in a haystack.
Fundamentally, the process of scouring vast quantities of information is computationally intensive, and a whole lot of electrons are inconvenienced in the process! Not long ago, pieces of information were moved from a hard disk drive (HDD) to DRAM, where they would be pulled into a static RAM (SRAM) cache next to the CPU. Programs manipulated this information in the CPU, and the result was ultimately stored again on the HDD. We realized that the round-trip time, from HDD to CPU and back to HDD, was taking too long.
To reduce these round-trip times we began to look for faster ways to move and process data. We made faster storage devices (solid state drives [SSDs]), faster interfaces (PCIe), faster protocols (NVMe), faster memory (DDR4), faster processors (how many cores now?!), and faster programs. And then, because our data kept getting bigger, we also increased the capacity of our volatile (DRAM) and nonvolatile (NAND flash) memories, ultimately building data centers and data warehouses filled with racks of servers and storage.
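To get a feel for why that round trip matters so much, here is a toy latency model in Python. The numbers are rough, order-of-magnitude figures commonly cited for each tier of the memory hierarchy (illustrative assumptions, not measured Micron data), and the `round_trip_ns` helper is hypothetical:

```python
# Toy model of a storage-to-CPU round trip.
# Latencies are rough order-of-magnitude estimates (assumptions,
# not measured figures from any specific device).

LATENCY_NS = {
    "hdd_seek": 10_000_000,  # ~10 ms to fetch a block from spinning disk
    "ssd_read": 100_000,     # ~100 us for a NAND flash (SSD) read
    "dram_access": 100,      # ~100 ns for a DRAM access
    "sram_cache": 1,         # ~1 ns for an on-die SRAM cache hit
}

def round_trip_ns(storage: str) -> int:
    """Nanoseconds for storage -> DRAM -> cache -> CPU -> storage."""
    # Data is read from storage, staged in DRAM, pulled into the
    # SRAM cache, processed, and the result is written back to storage.
    read = (LATENCY_NS[storage]
            + LATENCY_NS["dram_access"]
            + LATENCY_NS["sram_cache"])
    write = LATENCY_NS[storage]  # writing the result back dominates
    return read + write

hdd = round_trip_ns("hdd_seek")
ssd = round_trip_ns("ssd_read")
print(f"HDD round trip: {hdd / 1e6:.2f} ms")
print(f"SSD round trip: {ssd / 1e6:.2f} ms")
print(f"HDD-to-SSD speedup: ~{hdd / ssd:.0f}x")
```

Even in this crude sketch, the storage device dwarfs everything else in the path, which is why swapping the HDD for an SSD (and, later, for something closer to DRAM speeds) pays off so dramatically.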
But what if we could change the way we store and process all of this information? What if we could look for the “needles” in the data as it is captured, using the pattern recognition capabilities of Micron’s Automata Processor? Or what if we could reduce the computing round-trip time by making DRAM appear to be nonvolatile using a nonvolatile inline memory module (NVDIMM)? Or what if we could just make a faster, higher-density nonvolatile memory, sometimes called storage class memory (SCM), with performance characteristics close enough to DRAM to significantly reduce the computing round-trip time? That’s what we announced in July with 3D XPoint™ technology.
These are programs that, I am proud to say, Micron is working on. And along the way we’ve learned a lot. To see some of what we have learned, check out my recent SNIA webcast on how emerging memories will impact our storage and computing architectures.
Michael Abraham is Business Line Manager in the Storage Business Unit at Micron Technology and is responsible for emerging memories, including 3D XPoint technology. Over the past 10 years he has worked as an emerging memory and NAND flash architect and led Micron’s NAND Flash applications engineering team. Michael holds multiple memory technology patents and is a senior member of IEEE. You can follow Michael on Twitter @AbrahamMichaelM.
You can also follow us on Twitter @MicronStorage where we share insights and news related to the data storage industry.