DDR4, DDR5; GDDR5, GDDR6; LPDDR4, LPDDR5; HBM, HBM2... What do these increasing numbers mean? Why do they keep changing? Obviously, higher is better, but what is driving the change, and what does "better" mean? The simple answer is performance. Application requirements are continuously increasing, and with each new generation, processors run faster and hotter. To keep up, memory bandwidth must continue to increase to feed the processing beast.
Of all the processing units (CPU, GPU, APU, etc.), the graphics processing unit (GPU) is one of the hungriest beasts around. With their ever-increasing core counts, high-performance GPUs present a clear challenge: memory must provide enough data, via memory bandwidth, to allow these powerful compute engines to do their work in all areas of our modern world. GPUs are present everywhere from the edge, edge servers, consumer gaming and mobile data centers (automobiles) to the accelerators found in the cloud. Each of these market segments has different trends driving its application requirements, so high-performance memory solutions are not one size fits all. Each segment has unique care-abouts that directly influence the high-performance memory architecture used to support these trends.
Micron is focused on collaborating with industry thought leaders to provide high-value solutions to the market. These enablers are driving innovation across a broad scope of applications using their GPUs, including the data center, automotive, consumer graphics and AI accelerators. Let’s explore some of the megatrends in these segments and how they influence the high-performance memory architectures used to support their solutions.
Data centers are everywhere, extremely important, and experiencing strong growth fueled by artificial intelligence training and inference demands. AI training is driving HBM, currently the de facto memory for AI training thanks to its superior bandwidth and power efficiency. AI inference, with its need for faster interfaces, is driving HBM, GDDR and next-generation DDR. Cloud gaming will drive significant GDDR content in the data center as graphics rendering moves to the cloud instead of the console or PC. 5G will drive the need for more media transcoding and network acceleration using LPDDR, GDDR and possibly HBM. Autonomous driving generates millions of hours of video and sensor data from vehicles; this enormous amount of data will need to be analyzed and used for training. The Internet of Things (IoT) relies on sensor data that often needs to be aggregated and analyzed in the data center.
It seems like you hear about autonomous vehicles every day. As you can guess, they depend on high-performance processors and very high-performance memory solutions. A Level 5 autonomous vehicle (one that operates completely independently, without a driver) will require hundreds of teraflops of compute/AI performance, though the actual number is still to be determined, as unconstrained L5 has not yet been realized. Fully connected convolutional neural networks require extensive memory accesses to keep deep learning engines fed with data.
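To see why compute at this scale stresses memory, consider a simple bytes-per-FLOP estimate. The figures below are illustrative assumptions for a rough sizing exercise, not measured workload data; real ratios vary widely with on-chip data reuse.

```python
# Hypothetical sizing: bandwidth needed to feed an accelerator.
COMPUTE_TFLOPS = 100    # from the "hundreds of teraflops" L5 estimate
BYTES_PER_FLOP = 0.01   # assumed effective ratio after on-chip caching/reuse

# Bandwidth (GB/s) = FLOP/s * bytes moved per FLOP
required_gbs = COMPUTE_TFLOPS * 1e12 * BYTES_PER_FLOP / 1e9

print(f"Required memory bandwidth: {required_gbs:.0f} GB/s")  # 1000 GB/s
```

Even with aggressive on-chip reuse, this back-of-the-envelope estimate lands around a terabyte per second, which is well beyond what a single conventional memory device can deliver.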
Gaming is a large and growing piece of our entertainment portfolio. We all do it: in the family room on the game console, on a high-end gaming PC and, especially, on our mobile phones. Gaming has become so popular that watching people play video games is now considered a major sporting event. With this explosion, expectations have dramatically increased. High-resolution graphics (4K → 8K) are the norm. Virtual-reality-like imagery is expected. Real-time interaction is mandatory. All these requirements drive memory performance (think bandwidth) higher and higher, and GDDR memories are best equipped to provide it.
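To get a feel for why resolution drives bandwidth, a quick back-of-the-envelope calculation can compare raw frame traffic at 4K and 8K. The pixel format and frame rate below are illustrative assumptions:

```python
BYTES_PER_PIXEL = 4  # assumed RGBA8 format
FPS = 60             # assumed refresh rate

def scanout_gbps(width, height, bytes_per_pixel=BYTES_PER_PIXEL, fps=FPS):
    """Raw bytes per second just to refresh the display, in GB/s."""
    return width * height * bytes_per_pixel * fps / 1e9

four_k = scanout_gbps(3840, 2160)   # ~2.0 GB/s
eight_k = scanout_gbps(7680, 4320)  # ~8.0 GB/s (4x the pixels of 4K)

print(f"4K scan-out: {four_k:.1f} GB/s, 8K scan-out: {eight_k:.1f} GB/s")
```

And that is only scan-out: rendering touches each pixel many times (geometry, texturing, blending), so the effective bandwidth demand on the frame buffer is far larger, which is exactly where GDDR-class memory earns its keep.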
Artificial Intelligence Acceleration
AI training requires high memory bandwidth and power efficiency, and there is only one memory solution that can support these rigorous requirements. High Bandwidth Memory (HBM, HBM2, HBM2E) sits at the top of the memory bandwidth pyramid. HBM is constructed from multiple layers of DRAM stacked using through-silicon via (TSV) technology. These unique memory solutions provide massive bandwidth with excellent power efficiency.
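A rough sketch shows why the stacked approach wins on bandwidth: per-device bandwidth is simply interface width times per-pin data rate. The widths and data rates below are representative public figures for these memory classes, not specifications of any particular product:

```python
def device_bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    """Peak bandwidth of one memory device in GB/s (bits -> bytes: divide by 8)."""
    return bus_width_bits * pin_rate_gbps / 8

# HBM2E: very wide (1024-bit) stacked interface at a moderate per-pin rate.
hbm2e = device_bandwidth_gbs(1024, 3.2)   # ~410 GB/s per stack

# GDDR6: narrow (32-bit) interface at a much higher per-pin rate.
gddr6 = device_bandwidth_gbs(32, 16.0)    # 64 GB/s per device

print(f"HBM2E stack: {hbm2e:.0f} GB/s, GDDR6 device: {gddr6:.0f} GB/s")
```

A GPU can still reach high totals with GDDR6 by placing many devices around the package (for example, twelve 64 GB/s devices yield 768 GB/s), which is the essential trade-off between the two architectures: extreme per-stack bandwidth versus flexibility and board-level scaling.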
AI inference, while similarly demanding, utilizes GDDR and DDR memories to satisfy its requirements. Currently, this is done with GDDR6 and DDR4, but as you would expect, the next, improved generation is just around the corner. Watch a speech recognition demonstration to see how GDDR6 accelerates inference performance.
Micron has a unique perspective, as we develop memory and storage products that are used in every application and market. Micron can see that the demand for high-performance memories, especially those supporting consumer and automotive requirements, will continue to grow. Micron is driving the market with constantly evolving memory solutions to support the demanding requirements of these ever-growing applications that are changing our world. To learn more, please visit micron.com/graphics. If learning more about high-performance memory is your interest, you may want to check out the recently published white paper on “High Performance Memory”.