
Hybrid Memory Cube FAQs

Short-Reach HMC
What problem does HMC solve?

Over time, memory bandwidth has become a severe bottleneck to optimal system performance. Conventional memory technologies are not scaling with Moore’s Law; therefore, they are not keeping pace with the increasing performance demands of the latest microprocessor and application-specific integrated circuit (ASIC) roadmaps. Microprocessor and ASIC enablers are doubling cores and threads per core to greatly increase performance and workload capabilities. They are doing this by distributing work sets into smaller blocks among an increasing number of work elements (cores). Multiple compute elements per processor require an increasing amount of memory accesses per element. The term “memory wall” has been used to describe this dilemma. With performance levels that break through the memory wall, HMC is a revolutionary technology that enables greater performance for next-generation computing and high-speed networking systems.

Why are current DRAM technologies unable to fully solve this problem?

Current memory technology roadmaps do not provide sufficient performance to optimally meet the CPU, GPU, and ASIC memory bandwidth requirements. By advancing past the traditional DRAM architecture, HMC is establishing a new standard of memory to match the advancements of CPU, GPU and ASIC roadmaps. HMC offers system designers optimum flexibility in developing next-generation system architecture.

What makes HMC so different?

With performance levels that break through the memory wall, HMC unlocks a myriad of system performance advancements for the next generation of high-performance computing and advances network capabilities to support 100Gb and 400Gb system development.

HMC represents a fundamental change in memory construction and connectivity. Utilizing advanced 3D interconnect technology, HMC blends the best of logic and DRAM processes into a heterogeneous package. The foundation of HMC is a small logic layer that sits below vertical stacks of DRAM die connected by through-silicon-via (TSV) bonds. An energy-optimized DRAM array provides efficient access to memory bits via the logic layer, creating an intelligent memory device that’s truly optimized for performance and energy efficiencies. This elemental change in how memory is built into a system is paramount. By placing intelligent memory on the same substrate as the logic, each part of the system can function as it’s designed more efficiently than with previous technologies.

What are the measurable benefits of HMC?

HMC is a revolutionary innovation in DRAM memory architecture that delivers memory performance, power, reliability, and cost like never before. This major technology leap breaks through the memory wall, unlocking previously unthinkable processing power and ushering in a new generation of computing.

  • Increased Bandwidth − A single HMC unit can provide up to 15 times the bandwidth of a DDR3-1333 module.
  • Reduced Latency − With vastly more responders built into HMC, we expect lower queue delays and higher bank availability, which will provide a substantial system latency reduction, a key advantage in networking system design.
  • Power Efficiency − HMC’s revolutionary architecture enables greater power efficiency and energy savings, utilizing up to 70% less energy per bit than DDR3-1333 DRAM technologies.
  • Smaller Physical Footprint − HMC’s stacked architecture uses nearly 90% less physical space than today’s RDIMMs.
  • Pliable to Multiple Platforms − Logic layer flexibility enables HMC to be tailored to multiple platforms and applications.
  • Ultra Reliability − HMC delivers greater resilience and field reparability with a new paradigm of system-level, advanced reliability, availability, and serviceability (RAS) features that include embedded error-checking and correction capabilities.
  • Abstracted Memory − Designers can leverage HMC’s revolutionary features and performance without having to interface with complex memory parameters. HMC manages error correction, resiliency, refresh, and other parameters exacerbated by memory process variation.
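
As a rough sanity check on the bandwidth bullet above, a short sketch of the arithmetic (assuming peak theoretical transfer rates, not sustained throughput):

```python
# Rough arithmetic behind the "15x DDR3-1333" bandwidth claim.
# Assumes peak theoretical rates on a standard 64-bit module.

DDR3_1333_MT_PER_S = 1333          # mega-transfers per second
BUS_WIDTH_BYTES = 8                # 64-bit data bus

ddr3_gb_s = DDR3_1333_MT_PER_S * BUS_WIDTH_BYTES / 1000  # ~10.7 GB/s
hmc_gb_s = 15 * ddr3_gb_s                                # ~160 GB/s

print(f"DDR3-1333 module: {ddr3_gb_s:.1f} GB/s")
print(f"15x that:         {hmc_gb_s:.0f} GB/s")
```

Fifteen times a DDR3-1333 module's peak of roughly 10.7 GB/s lands at about 160 GB/s, consistent with the aggregate figure quoted for Micron's four-link device later in this FAQ.
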

What does the implementation of HMC look like?

HMC is tightly coupled with CPUs, GPUs, and ASICs in direct point-to-point configurations where HMC performance is essential to system performance. The result is low pin counts with easy board routing in straightforward designs. In systems that require higher density, HMC supports chaining and half-width link configurations to keep the host pin counts down and the designs simple.

What are the challenges of HMC implementation?

As with any leading technology, some of the “copy and paste” aspects of using older designs are lost. However, with Micron’s support documents and a fast-growing ecosystem, you’ll be up to speed in no time.

What industries/segments do you anticipate will be affected the most?

Any applications where high performance and energy efficiency are critical will be dramatically affected by this technology. For example, the challenge for network systems to maintain line speed performance provides an excellent opportunity for HMC. System developers recognize that a memory bottleneck exists for system development beyond 100Gb and are actively looking for high-performance memory applications for data packet processing and data packet buffering or storage.

The high-performance computing segment is also hitting the memory wall. While processor roadmaps attempt to keep pace through core and thread doubling, core and thread count has not been matched with adequate memory performance. The second major challenge for high-performance computing is energy consumption. Higher-performance processing and exponential bit growth requirements are pushing data centers beyond practical limits for managing power and total cost of ownership. A more energy-efficient solution is desperately needed.

What is the HMCC and what are its goals?

The Hybrid Memory Cube Consortium (HMCC) is a working group made up of industry leaders who build, design in, or enable HMC technology. The goal of the HMCC is to define industry-adoptable HMC interfaces and to facilitate the integration of HMC into a wide variety of applications that enable developers, manufacturers, and enablers to leverage this revolutionary technology.

What does the HMCC specification cover?

The specification includes two PHY definitions and a common protocol. The short-reach (SR) PHY is designed for applications needing channel lengths up to 8 inches, and the ultra short-reach (USR) PHY is intended for applications requiring very short and power-efficient channels with lengths from 1 to 2 inches.

Where can the HMCC specification be accessed?

The HMCC specification is publicly available on hybridmemorycube.org.

What Micron parts are available?

Our 2GB HMC device, composed of a stack of four 4Gb DRAM die, is available. It is designed using the HMCC’s short-reach (SR) PHY definition and comes in a 31mm x 31mm package offering four links with a full 160 GB/s of bandwidth.
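
A sketch of how the 160 GB/s aggregate figure can decompose across the four links. Note the lane count and per-lane rate below are illustrative assumptions; the answer above states only the link count and the total:

```python
# How 160 GB/s can decompose across four HMC links.
# Lane count (16 per direction) and the 10 Gb/s lane rate are
# assumptions for illustration; the FAQ states only "four links"
# and the 160 GB/s aggregate.

LINKS = 4
LANES_PER_DIRECTION = 16
LANE_RATE_GBIT_S = 10
DIRECTIONS = 2                     # full-duplex: transmit + receive

per_link_gb_s = LANES_PER_DIRECTION * LANE_RATE_GBIT_S * DIRECTIONS / 8
total_gb_s = LINKS * per_link_gb_s

print(f"Per link:   {per_link_gb_s:.0f} GB/s")
print(f"Four links: {total_gb_s:.0f} GB/s")
```
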

When is Micron planning for HMC volume production?


How do I access a data sheet or technical information for Micron’s HMC parts?

HMC technical documents, including the datasheet, are only available under a non-disclosure agreement (NDA). Please work with your sales representative for access.

Knights Landing
What is Micron announcing regarding Intel’s Knights Landing next-generation CPU architecture?

The high-performance, on-package memory found in Knights Landing leverages the fundamental DRAM and stacking technologies also found in Micron’s HMC products.

Is this high-performance, on-package memory the same as HMC?

While leveraging the same fundamental technology benefits of HMC, this high-performance on-package memory has been optimized for integration into Knights Landing platforms.

How have Intel and Micron collaborated to bring this solution to fruition?

Micron and Intel have been collaborating on methods to break down the memory wall for years. The teams demonstrated early success at IDF 2011 where Micron’s HMC Gen1 device and an Intel memory interface targeted at many-core CPUs provided a sneak peek at the future of memory.

Are there plans to use this high-performance, on-package memory on other (future) Intel platforms?

Both Micron and Intel believe that high-performance, on-package memory will play a significant role in multi-core CPU architectures now and in the future.

Will this high-performance, on-package memory be available to other customers?

No, this memory solution has been developed specifically for Intel’s Knights Landing.

Will Intel standardize high-performance, on-package memory?

This memory solution was developed with the intent of being integrated into the Knights Landing platform; there is no plan for standardization at this time.

What is the value that high-performance, on-package memory brings to Knights Landing?

Just like HMC, high-performance, on-package memory provides unprecedented levels of memory bandwidth with a fraction of the energy and footprint of existing memory technologies along with the RAS capabilities required by HPC systems.

How does high-performance, on-package memory differ from what is being developed within the HMC Consortium?

The HMC Consortium (HMCC) is devoted to developing and driving open-standard interfaces and protocols. By contrast, the high-performance, on-package memory described above was developed specifically for the Knights Landing platform and is not part of that standardization effort.