Memory

DDR5: The Next Step in System Level Performance: Part II

By Brian Drake - 2019-11-21

You may have caught my previous blog about DDR5 being the next step in system-level performance. That blog focused on four critical aspects of this upcoming DRAM architecture:

  1. Why is DDR5 critical from a performance perspective? New memory architectures are required to meet next-generation bandwidth-per-core requirements.
  2. What performance gains should we expect to see from DDR5? At introductory data rates of 4800MT/s, the potential increase in effective bandwidth jumps to 1.87x that of DDR4.
  3. Does DDR5 address the ability to reliably scale? Yes, to continue to shrink process nodes and scale density to larger monolithic devices, architectural changes are required.
  4. When will DDR5 be available in the market? DDR5 will be available in the 2021 timeframe, if not before!
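The raw data-rate portion of that headline gain is simple arithmetic. Below is a rough sketch (not from the original post) of the peak-bandwidth math, assuming a conventional 64-bit (8-byte-per-beat) channel; the quoted 1.87x effective gain layers bus-efficiency improvements on top of the raw 1.5x rate increase.

```python
# Peak-bandwidth arithmetic behind the headline numbers (illustrative only).
# Assumes a conventional 64-bit channel transferring 8 bytes per beat.
BYTES_PER_BEAT = 8

def peak_bandwidth_gbps(data_rate_mts):
    """Peak channel bandwidth in GB/s for a 64-bit channel."""
    return data_rate_mts * BYTES_PER_BEAT / 1000

ddr4 = peak_bandwidth_gbps(3200)   # 25.6 GB/s
ddr5 = peak_bandwidth_gbps(4800)   # 38.4 GB/s
print(round(ddr5 / ddr4, 2))       # 1.5 -> from raw data rate alone
# The quoted 1.87x effective gain adds bus-efficiency improvements
# (BL16, more bank groups, same-bank refresh) on top of this 1.5x.
```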

In this follow-up blog, I discuss in a little more detail some of the features of DDR5 that enable the aforementioned performance gains and scaling capability. However, the improvements do not come from new features alone. In some cases, it is optimizations in DDR5 relative to previous generations of DRAM architecture that drive significant improvements and benefits.

How does DDR5 improve performance compared to DDR4?

Increased data burst length to 16

A data burst length of 16 (BL16) is required on DDR5 to take full advantage of the increased data rates, as the core timing of the DRAM has not improved. BL16 improves data- and command-bus efficiency because larger array accesses limit exposure to I/O-array timing constraints within the same bank. The increase to BL16 also enables the new DIMM architecture, which splits the module into two completely independent 40-bit channels (32 data bits plus eight ECC bits each). This improves concurrency and essentially doubles the available memory channels in the system.
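The arithmetic linking BL16 to the split-channel DIMM is worth spelling out. The sketch below (my illustration, not from the post) assumes each 40-bit subchannel carries 32 data bits per beat, so a single BL16 burst on one subchannel still delivers a full 64-byte CPU cache line:

```python
# Why BL16 plus two independent subchannels still fills a 64B cache line
# (a sketch of the arithmetic, not a bus simulator).
burst_length = 16          # DDR5 burst length (BL16)
subchannel_data_bits = 32  # each 40-bit subchannel: 32 data + 8 ECC bits

bytes_per_burst = burst_length * subchannel_data_bits // 8
print(bytes_per_burst)  # 64 -> one full cache line per subchannel burst
```

With DDR4's BL8 on a 64-bit channel, the whole channel was occupied per cache line; here each half-width subchannel can serve a different cache line concurrently.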

Increased banks and bank groups

DDR5 doubles the number of bank groups while leaving the number of banks per bank group the same. Increasing bank groups is key because accesses to different bank groups require less delay between them than accesses within the same bank group; banks within the same bank group share local I/O routing, sense amplifiers, and array blocks, which imposes longer timing constraints. Doubling the total number of banks is also key, as it improves overall system efficiency by allowing more pages to be open at any given time, thereby increasing the statistical probability of high page-hit ratios.
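A minimal sketch of why more bank groups helps, assuming the bank counts described above (DDR4: 4 groups of 4 banks; DDR5: 8 groups of 4) and a simple round-robin address mapping of my own invention for illustration — consecutive accesses that land in different bank groups only pay the shorter back-to-back command delay (tCCD_S) instead of the same-group delay (tCCD_L):

```python
# Illustrative bank-group interleave (not an actual controller policy).
DDR4 = {"bank_groups": 4, "banks_per_group": 4}   # 16 banks total
DDR5 = {"bank_groups": 8, "banks_per_group": 4}   # 32 banks total

def total_banks(cfg):
    return cfg["bank_groups"] * cfg["banks_per_group"]

def bank_group_of(access_index, cfg):
    # Round-robin mapping: consecutive accesses land in different bank
    # groups, so back-to-back commands avoid the longer same-group delay.
    return access_index % cfg["bank_groups"]

print(total_banks(DDR4), total_banks(DDR5))         # 16 32
print([bank_group_of(i, DDR5) for i in range(10)])  # [0, 1, 2, 3, 4, 5, 6, 7, 0, 1]
```

With eight groups instead of four, a streaming access pattern can go twice as long before it must revisit a bank group.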

Improved refresh schemes

With DDR5 comes a new feature called SAME-BANK Refresh. This command refreshes one bank per bank group, leaving all the others available to continue normal operation. If we take the features noted above and simulate a 64B random-access workload, we see substantial performance gains compared to DDR4. In this scenario, we assume eight channels and one DIMM per channel (DPC). Even when comparing a single-rank DDR5 module to a dual-rank DDR4 module at 3200MT/s, we see a 1.28x performance gain! That is an apples-to-apples comparison on data rate, but at introductory data rates of 4800MT/s, we see a gain of up to 1.87x!
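A back-of-the-envelope view of what SAME-BANK Refresh buys, using the 8-group, 32-bank device organization discussed above. This is my own simplified availability count, not JEDEC-accurate timing: it only compares how many banks are tied up during a refresh under each scheme.

```python
# Rough bank-availability sketch: all-bank refresh vs SAME-BANK refresh.
bank_groups = 8
banks_per_group = 4
total_banks = bank_groups * banks_per_group

# All-bank refresh: every bank is busy while the refresh completes.
busy_all_bank = total_banks

# SAME-BANK refresh: one bank per bank group refreshes; the rest stay open.
busy_same_bank = bank_groups * 1

print(busy_all_bank, busy_same_bank)  # 32 8
available = (total_banks - busy_same_bank) / total_banks
print(f"{available:.0%} of banks remain available")  # 75% of banks remain available
```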

How does DDR5 improve reliability and scalability compared to DDR4?

Optimized DRAM core timings

Memory architectures continue to scale year over year to enable higher-density monolithic devices and more die per wafer. With this scaling, however, come smaller cell areas and feature sizes, which bring disadvantages that must be addressed. Examples include, but are not limited to, DRAM cell capacitance that continues to drop, smaller access devices with Ion/Ioff implications, and longer bit lines needed for array efficiency. To address these items, DDR5 optimizes core timings such as tRCD, tWR, and tRP to allow for reliable scaling. These timings are critical to ensure adequate time to write, store, and sense charge in the DRAM cell.

On-die error correction code

In addition to optimized core timings, on-die error correction code (ECC) further improves data integrity as writing, storing, and sensing charge continues to become more challenging. On-die ECC reduces the system's error-correction burden by performing correction during READ commands, prior to outputting the data from the DDR5 device. Furthermore, DDR5 introduces an error check and scrub feature, where the DRAM device reads internal data and writes back corrected data if an error occurred.
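To illustrate the principle behind single-error correction, here is a classic Hamming(7,4) encoder/corrector — purely my illustration of the technique; the actual DDR5 on-die ECC uses a different, wider code over each internal data word. The syndrome computed at read time directly points at the flipped bit, which is what lets the device correct data before driving it onto the bus:

```python
# Hamming(7,4) single-error correction: 4 data bits + 3 parity bits.
# Codeword positions 1..7 hold: p1 p2 d1 p4 d2 d3 d4.
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4        # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c):
    # Nonzero syndrome = 1-based position of the flipped bit.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4
    c = c[:]
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the bad bit back
    return c, syndrome

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                  # inject a single-bit error at position 5
fixed, pos = hamming74_correct(code)
print(pos)                                        # 5
print([fixed[2], fixed[4], fixed[5], fixed[6]])   # [1, 0, 1, 1] (data recovered)
```

The error check and scrub feature then closes the loop: rather than only correcting on the way out, the device periodically reads, corrects, and writes back, so a correctable single-bit error does not linger and combine with a second one.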

Want to know more about DDR5?

DDR5's performance, scalability, and reliability improvements are not limited to the features above, but these are some of the most significant, and we should start seeing their benefits deployed in coming servers and data centers. They are just a subset of the changes that will help meet the stringent requirements of next-generation systems and improve the total cost of ownership.

JEDEC continues to work hard to drive the DDR5 specification to closure. The most recent workshop was held in early October and was well attended by industry experts, including Micron. Excitement continues to build around the possibilities that DDR5 offers for computing systems, and Micron is ready to engage with system architects to help them maximize this new product architecture.

Bookmark Micron.com for white papers (diving deeper into features and optimizations) and announcements regarding DDR5! The latest DDR5 white paper can be found here.

Brian Drake

Brian leverages 14 years of DRAM expertise to lead strategy development in the Data Center segment, with a focus on enabling DDR5 solutions for hyperscale customers. Before moving to his current role within Micron, Brian spent six years in Product Engineering, where his time was split between leading and contributing to teams responsible for developing, enabling, and maintaining DRAM products. For four years prior to joining Micron, he held roles within Infineon and Qimonda as a DRAM test program engineer.