Last night in San Francisco, NVIDIA launched the new GeForce GTX 1080 Ti, an update to their powerful Pascal-based GTX 1080 graphics card that has dominated performance benchmarks since it launched last spring. It was just over a year ago that I announced the arrival of G5X on this blog, highlighting how this new memory would help fuel a new generation of bandwidth-hungry graphics cards. This announcement is another step down that path.
The 1080 Ti, NVIDIA’s latest gaming marvel, is powered by Micron’s next-generation quad-data-rate G5X. Our specialized graphics memory design team in Munich has been working incredibly hard, in lockstep with NVIDIA’s design team, to enable 10 percent more bandwidth from the memory. The result is a new G5X on the 1080 Ti that operates at 11Gb/s per pin. Designing a memory that operates at these high speeds is no easy endeavor. Even more difficult is optimizing the system across the GPU, channel/PCB, and memory at these speeds. Simply put, we had to ensure the memory “talks” to the GPU with minimal noise, jitter, and loss across process, voltage, and temperature (PVT) variation. The launch of Micron’s next-generation G5X, now the fastest discrete memory technology in the world at a whopping 44GB/s per component, shows what’s possible when you bring together some of the brightest minds in the industry.
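That 44GB/s figure falls straight out of the interface math: each G5X component has a 32-bit (x32) data interface, and dividing bits by 8 gives bytes. A quick illustrative check in Python (the function name is ours, not a Micron API):

```python
def component_bandwidth(data_rate_gbps_per_pin: float, pins: int) -> float:
    """Peak component bandwidth in GB/s: per-pin data rate times pin count,
    divided by 8 bits per byte."""
    return data_rate_gbps_per_pin * pins / 8

# A G5X component with a 32-bit interface running at 11 Gb/s per pin:
print(component_bandwidth(11, 32))  # 44.0 GB/s
```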
[Figure: Original G5X @ 11Gb/s vs. Next-Gen G5X @ 11Gb/s]
The chart below shows the data rate performance of discrete memory technologies available today. You can see how Micron’s G5X offers significant performance gains and why NVIDIA chose to use it for their latest flagship gaming card:
**Bandwidth by Memory Technology**

| Data rate (Gb/sec/pin) | 2.1 | 3.2 | 4.2 | 8 | 11 |
|---|---|---|---|---|---|
| Bandwidth per component (GB/sec) | 4 | 6 | 17 | 32 | 44 |
The 1080 Ti doesn’t rely on higher memory data rates alone, though. NVIDIA also increased the size of the frame buffer to 11GB by moving to a 352-bit wide bus, opening the throttle on the 1080 Ti and delivering more than 480GB/s of total memory bandwidth. The result is an incredibly crisp, lag-free, and immersive visual experience for gamers and VR enthusiasts.
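The same arithmetic accounts for both headline numbers: a 352-bit bus at 11Gb/s per pin works out to 484GB/s, and 352 bits divided across 32-bit components implies eleven G5X devices, which yields an 11GB frame buffer assuming 1GB (8Gb) per component. A back-of-the-envelope sketch:

```python
def total_bandwidth(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Total memory bandwidth in GB/s across the full memory bus."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

# 352-bit bus at 11 Gb/s per pin:
print(total_bandwidth(11, 352))  # 484.0 GB/s, i.e. "more than 480GB/s"

# 352-bit bus / 32 bits per component = 11 components;
# at an assumed 1GB per component, that is an 11GB frame buffer.
print(352 // 32)  # 11
```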
So why is the frame buffer becoming increasingly important in next-generation, high-performance graphics cards? New games are demanding it. Richer textures, higher resolutions, and more realistic effects demand a much bigger and faster frame buffer. NVIDIA presented a timeline of game requirements that makes this point:
Today’s announcement illustrates our commitment to developing graphics memory solutions that truly free our customers to get the most from their GPU designs, delivering amazing end systems. We’re big believers in the performance, simplicity, and efficiency that our G5X memory provides. In fact, we see compelling uses for G5X (as well as future graphics memories) in other market segments such as networking and supercomputing.
We’d like to congratulate our partners on another truly impressive graphics system. It’s been an unbelievable year to work in graphics memory, and I’m just as excited for what’s ahead.