Over time, memory bandwidth has become a severe bottleneck to system performance. Conventional memory technologies are struggling to keep pace with the increasing performance demands of the latest microprocessor and application-specific integrated circuit (ASIC) roadmaps. Microprocessor and ASIC designers are doubling cores and threads per core to greatly increase performance and workload capabilities, distributing work sets into smaller blocks among an increasing number of work elements (cores). Multiple compute elements per processor require an increasing number of memory accesses per element. The term “memory wall” has been used to describe this dilemma. With bandwidth levels that break through the memory wall, HMC enables greater performance for next-generation computing and high-speed networking systems.
HMC is tightly coupled with CPUs, GPUs, and ASICs in direct point-to-point configurations where HMC performance is essential to system performance. The result is a low pin count with easy board routing in straightforward designs. In systems that require higher density, HMC supports chaining and half-width link configurations to keep host pin counts down and designs simple.
Any application where high performance and energy efficiency are critical will be dramatically affected by this technology. For example, the challenge for network systems to maintain line-speed performance provides an excellent opportunity for HMC. System developers recognize that a memory bottleneck exists for system development beyond 100 Gb/s and are actively looking for high-performance memory solutions for data packet processing and data packet buffering or storage.
The high-performance computing segment is also hitting the memory wall. While processor roadmaps attempt to keep pace by doubling cores and threads, that growth in core and thread count has not been matched with adequate memory performance. The second major challenge for high-performance computing is energy consumption. Higher-performance processing and exponential bit-growth requirements are pushing data centers beyond practical limits for managing power and total cost of ownership. A more energy-efficient solution is desperately needed.
The Hybrid Memory Cube Consortium (HMCC) is a working group made up of industry leaders who build, design in, or enable HMC technology. The goal of the HMCC is to define industry-adoptable HMC interfaces and to facilitate the integration of HMC into a wide variety of applications, enabling developers, manufacturers, and technology partners to leverage this revolutionary technology.
The specification includes two PHY definitions and a common protocol. The short-reach (SR) PHY is designed for applications needing channel lengths up to 8 inches, and the ultra short-reach (USR) PHY is intended for applications requiring very short and power-efficient channels with lengths from 1 to 2 inches.
The HMCC specification is publicly available at hybridmemorycube.org.
Our 2GB HMC device, composed of a stack of four 4Gb DRAM die, is available. It is designed using the HMCC’s short-reach (SR) PHY definition and comes in a 31mm x 31mm package offering four links with a full 160 GB/s of aggregate bandwidth.
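The headline bandwidth figure follows from simple per-link arithmetic. The sketch below is a back-of-the-envelope check, assuming lane parameters typical of first-generation SR links (16 lanes per direction at 10 Gb/s per lane, which are not stated in the text above); only the four-link count and the 160 GB/s total come from this document.

```python
# Back-of-the-envelope HMC bandwidth arithmetic (a sketch, not the spec).
# ASSUMPTIONS: 16 lanes per direction and 10 Gb/s per lane are typical
# first-generation SR link parameters, not figures taken from this article.

LANES_PER_DIRECTION = 16   # full-width SR link, each direction (assumed)
LANE_RATE_GBPS = 10        # Gb/s per lane (assumed)
LINKS = 4                  # links per cube (from the text)

def link_bandwidth_gbytes(lanes_per_dir: int, rate_gbps: float) -> float:
    """Aggregate (TX + RX) bandwidth of one link in GB/s."""
    # Both directions, then convert bits to bytes.
    return 2 * lanes_per_dir * rate_gbps / 8

per_link = link_bandwidth_gbytes(LANES_PER_DIRECTION, LANE_RATE_GBPS)
total = LINKS * per_link
print(per_link, total)  # 40.0 GB/s per link, 160.0 GB/s per cube

# A half-width link (8 lanes per direction) halves per-link bandwidth,
# trading throughput for a lower host pin count, as noted earlier.
half_width = link_bandwidth_gbytes(LANES_PER_DIRECTION // 2, LANE_RATE_GBPS)
```

Under these assumed lane parameters, four full-width links reproduce the 160 GB/s figure quoted above.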