When we launched G5X just over a year ago, we were proud to deliver the world’s fastest discrete graphics memory for NVIDIA’s highest-performance gaming and workstation-class graphics cards. To keep up with the insatiable memory demands of high-performance GPUs for gaming, visualization and artificial intelligence, we continue to push the envelope on graphics memory data rates and yields. We’d like to show where Micron presently stands in delivering next-generation memory bandwidth up to 16Gbps (gigabits per second, per pin).
For today’s update, I’ll answer a handful of questions about next-generation memory that we’ve received from our blog readers, analysts, and gamers.
What is driving growth in the graphics/GPU market? What impact does this have for memory?
There are a number of exciting trends across the graphics market segments driving an increase in demand for high performance GPUs. At the highest level: content is becoming richer and more demanding, resolutions are increasing, VR/AR continues to gain momentum, and GPUs are being used in new ways for machine learning and autonomous driving applications. This is part of a broader trend we see across the industry—our customers are trying to solve new and complex problems, and more often than not, the solution comes from finding new ways to leverage the power and capabilities of memory technologies.
To meet the demands of these high-performance GPU systems, memory bandwidth and density requirements have been increasing at an incredibly fast pace. Just a couple of years ago, a graphics card was considered high performance with a 4GB memory frame buffer and data rates of 6-7Gbps. Fast forward to today: frame buffer sizes have essentially doubled, and with the introduction of G5X, memory data rates have reached 12Gbps. And in some cases the enthusiasts, folks who stop at nothing to have the best gaming machines they can get their hands on, are using multiple GPUs, dramatically increasing memory content per system. Going forward, we expect this trend to continue, as highlighted in the graphic below.
Can you provide an update on GDDR5X, particularly where you are with higher speeds?
We launched G5X just over a year ago, initially with a 10 Gbps speed sort. Since then, our teams have been focused on increasing data rates and yields. Mass production speed sorts for our G5X now include 10, 11 and 12 Gbps. We are incredibly proud that Micron’s G5X is the memory that fuels NVIDIA’s highest-performance gaming and workstation-class graphics cards. Most recently, NVIDIA launched the Titan Xp utilizing Micron’s next-gen G5X at 11.4 Gbps, which is now in mass production.
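To put these per-pin data rates in card-level terms: peak memory bandwidth is simply the per-pin data rate multiplied by the width of the memory bus, divided by 8 bits per byte. A minimal sketch in Python, using the Titan Xp’s publicly stated 384-bit bus as an example (the helper name is my own, not anything from a Micron or NVIDIA tool):

```python
def card_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width, over 8 bits/byte."""
    return data_rate_gbps * bus_width_bits / 8

# Titan Xp: 11.4 Gbps G5X on a 384-bit memory bus.
print(card_bandwidth_gbs(11.4, 384))  # 547.2 GB/s
```

The same arithmetic explains why pushing the per-pin rate from 10 to 12 Gbps lifts a card’s bandwidth by 20% with no change to the bus width.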
When we talk about memory speeds, there are generally two conditions we reference: mass production and the engineering environment. While the former is important to the here and now, the latter is equally material, as it tells us where we are in terms of headroom on the memory core and what capability we have to increase speeds going forward.
To that end, I am excited to announce that our Graphics design team in Munich has achieved 16Gbps data rates in our high speed test environment—another first for the memory industry. The left image below shows the data eye opening at 16Gbps based on a critical PRBS pattern sequence, with excellent timing and voltage margin. The right image shows stable data timing margin (horizontal axis) versus data rate (vertical axis), from our base speed sort of 10Gbps up to an unprecedented 16Gbps. These results are based on measurements of a meaningful sample size of our mass-production G5X silicon, not theoretical simulation data.
Micron G5X PRBS11 Read Data Eye at 16Gbps
Micron G5X Frequency vs Read Strobe Shmoo 10-16Gbps
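For readers curious about the PRBS11 pattern behind the eye measurement: PRBS-11 is a standard pseudo-random bit sequence produced by an 11-bit linear-feedback shift register with polynomial x^11 + x^9 + 1, so over one period it exercises every possible non-zero 11-bit data history on the link. A minimal illustrative sketch (the seed is arbitrary; any non-zero state works):

```python
def prbs11(seed: int = 0x7FF, length: int = 2047) -> list:
    """Generate PRBS-11 bits with an 11-bit Fibonacci LFSR, polynomial x^11 + x^9 + 1."""
    state = seed & 0x7FF
    assert state != 0, "LFSR state must be non-zero"
    bits = []
    for _ in range(length):
        bits.append((state >> 10) & 1)                 # output the top bit
        feedback = ((state >> 10) ^ (state >> 8)) & 1  # taps at positions 11 and 9
        state = ((state << 1) | feedback) & 0x7FF
    return bits

seq = prbs11()
# A maximal-length sequence repeats only every 2**11 - 1 = 2047 bits and
# contains 1024 ones and 1023 zeros over one full period.
```

The dense, nearly random transitions are what make PRBS patterns a stressful test for high-speed signaling margins.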
We strongly believe that our expertise and experience running ultra-high data rates on G5X is going to be a big advantage for driving performance in GDDR6 (which brings me to the next question).
There has been a lot of talk about GDDR6 in the news recently. How is this different from G5X and what are Micron’s plans?
GDDR6 will continue down the successful path of G5X high speed signaling based on conventional DRAM packaging. Some differences do exist between G5X and G6, the most notable of which are:
- The introduction of a 180-ball FBGA package with increased ball pitch
- A dual channel architecture
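One way to see what the dual-channel change does and does not affect: a G6 device still presents 32 data pins, but splits them into two independent 16-bit channels, which changes access granularity rather than raw per-device bandwidth. A rough sketch with illustrative data rates (the function name is mine, for illustration only):

```python
def device_bandwidth_gbs(data_rate_gbps: float, data_pins: int = 32) -> float:
    """Peak per-device bandwidth in GB/s: per-pin data rate times pin count, over 8 bits/byte."""
    return data_rate_gbps * data_pins / 8

# G5X: a single 32-bit channel; G6: two independent 16-bit channels, same 32 pins total.
print(device_bandwidth_gbs(12.0))                 # 48.0 GB/s per G5X device at 12 Gbps
print(device_bandwidth_gbs(16.0))                 # 64.0 GB/s per G6 device at 16 Gbps
print(device_bandwidth_gbs(16.0, data_pins=16))   # 32.0 GB/s per G6 channel
```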
This table provides a comparison between the two memories:
With regard to the status of Micron’s G6 program, which we first announced in February, I am pleased to report that our product development efforts are on track and we expect to have functional silicon very soon. By leveraging roughly two years of G5X-based high-speed signaling experience across design, mass production, test and application knowledge, I am confident we are well positioned to bring the industry’s most robust G6 to mass production by early 2018.
What Graphics products do you use and why?
I should start by saying that I’m a big believer in using systems that utilize our Graphics memory, and I encourage my team to do the same. Being a user of the products our customers create allows us to better understand their capabilities and drives a level of passion and appreciation for the products and markets we serve.
At home I’ve been running NVIDIA’s GTX 1080 Founders Edition for just under a year. This was the first graphics card to use Micron’s G5X, and it is my first high-end gaming PC. The card is connected to a 4K TV and a high-end virtual reality system… the gaming and VR experience is amazing. On the console side I have the PlayStation 4 with VR. Beyond gaming, I enjoy capturing and editing video, and for that I just upgraded to the Titan Xp… it renders 4K video without breaking a sweat.
At the office we have an incredible gaming notebook: the ASUS ROG GX800. I wanted something portable so the team wasn’t bound to any one location, but I didn’t want to sacrifice performance. This machine is equipped with two NVIDIA GTX 1080s in SLI using 16GB of Micron’s G5X, and the result is an amazing gaming and virtual reality experience. It has plenty of headroom for overclocking too, thanks to a liquid-cooled docking station.