Building AI, region by region: Why memory and storage define the next decade 

Viral Gosalia | October 2025

When I examine the most impactful periods of my career, those centered on innovation and developing new businesses, they all share a core characteristic: seizing a secular wave that is fundamentally reshaping industry. That entrepreneurial drive and vision for growth are why I am passionate about Micron, a company built on invention, global technology leadership and local community impact. Today, the most significant transformation is the age of accelerated intelligence, and Micron is the catalyst that makes it practical, efficient and locally accessible worldwide.

1. How are AI’s secular trends reshaping the global landscape?

AI compute demand is doubling every six months, driving a global race to localize intelligence—where nations and industries are building data centers at unprecedented speed to deliver AI everywhere, with sovereignty and scale.

Artificial intelligence is driving a secular trend that is transforming every aspect of modern life, moving from centralized labs to widespread deployment in areas such as finance, medicine, engineering and manufacturing. This shift requires AI infrastructure to scale from training to inference delivered everywhere.

Two key demands define this global transition:

  • Inference everywhere: AI workloads must be delivered closer to where data is generated — whether in the cloud, in on-premises data centers, or on edge devices.
  • Sovereignty of data and AI: Nations and regions are prioritizing sovereignty over their data and AI to anchor trust, resilience and economic growth, accelerating large-scale regional data center build-outs globally.

You can see this momentum in major regional deployments:

  • United States: Large-scale multi-site hyperscale programs, such as the Stargate initiative, pair training hubs with regional inference zones, requiring maximum density and power management.
  • Europe: The focus is on achieving sovereign capacity through green energy integration, exemplified by landmark sites like Stargate Norway and GPU clusters supporting homegrown AI companies, such as Mistral.
  • The Middle East: Countries such as the UAE and Saudi Arabia are rapidly scaling their capacity through national AI programs for smart-city logistics and healthcare, requiring infrastructure that respects data residency while optimizing throughput per rack.
  • Asia Pacific (APAC): Markets across Asia Pacific, such as China, India, Japan and Taiwan, are rapidly building AI infrastructure for training and inference, including in local languages, enabling applications that enrich the lives of billions.

2. How is Micron unlocking AI’s next wave of scale and performance?

AI models must access and process massive datasets as fast as possible. Because memory and storage are where data resides, the speed and efficiency of data movement are the critical bottleneck: when data flow lags, expensive GPU clusters and CPUs are starved of data. Micron’s technology and product leadership address this memory wall with performance, capacity and energy efficiency.

This AI-led transformation relies on Micron’s specialized memory and storage portfolio:

Performance and latency: Keeping compute fed

Memory and storage are critical to keeping data flowing to processors (GPUs, CPUs, XPUs).
 

Product/technology and its value proposition for AI (speed and latency):

  • HBM (HBM3E / HBM4 roadmap): HBM is the memory of choice for GPU-based high-performance computing (HPC) and AI server solutions. Micron’s HBM3E provides industry-leading bandwidth of more than 1.2 TB/s at a pin speed above 9.6 Gb/s. Micron is sampling HBM4 to key customers; it delivers more than 2 TB/s to enable seamless integration into next-generation AI platforms.
  • GDDR7: A GDDR7 system running at 32 Gb/s per pin can achieve over 1.5 TB/s of system bandwidth, a 60% increase over a GDDR6 system. For generative AI workloads such as text-to-image creation, Micron’s GDDR7 reduces response times by up to 20% compared to GDDR6, delivering faster user experiences.
  • MRDIMM: Multiplexed-rank DIMMs offer the highest bandwidth and lowest latency among main-memory solutions. MRDIMM delivers up to 39% more bandwidth than DDR5 RDIMM data rates and up to 40% lower latency for capacity- and bandwidth-sensitive workloads, such as retrieval-augmented generation (RAG).
  • SOCAMM: Micron’s technological innovation, in collaboration with NVIDIA, led to the adoption of LPDRAM in data center platforms, providing unparalleled performance per watt. Micron’s LP memory on GH200 with NVLink delivered five times better inference throughput and around 80% better latency than a DDR5 system (on an x86 platform with a PCIe-connected Hopper GPU).
  • Micron 9650 PCIe Gen6 SSD: The world’s first PCIe® Gen6 data center SSD. It delivers up to two times the performance of PCIe Gen5 drives, with sequential read speeds up to 28 GB/s and 5.5 million random read IOPS, maximizing throughput to keep hungry GPUs fed and idle cycles to a minimum.
  • Micron 7600 SSD: Delivers best-in-class low latency and superior quality of service (QoS) for AI and demanding data center workloads, with rapid access across a broad range of mainstream workloads.
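The bandwidth figures above follow from simple arithmetic: aggregate bandwidth is the per-pin data rate multiplied by the interface width. A minimal sketch, assuming the standard 1024-bit HBM3E interface and an illustrative 384-bit GDDR7 bus (the bus width is an assumption, not a Micron-published figure):

```python
def system_bandwidth_tbs(pin_rate_gbps, bus_width_bits):
    """Aggregate bandwidth in TB/s from per-pin rate (Gb/s) and interface width (bits)."""
    return pin_rate_gbps * bus_width_bits / 8 / 1000  # bits -> bytes, GB -> TB

# HBM3E: 1024-bit interface at 9.6 Gb/s per pin (standard HBM width)
print(system_bandwidth_tbs(9.6, 1024))  # ~1.23 TB/s per cube

# GDDR7: assumed 384-bit bus at 32 Gb/s per pin (bus width is illustrative)
print(system_bandwidth_tbs(32, 384))    # ~1.54 TB/s system bandwidth
```

Both results line up with the ">1.2 TB/s" and "over 1.5 TB/s" figures quoted above.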


Technological innovation: Enabling memory capacity to scale

  • The 1γ (1-gamma) DRAM technology node increases bit density per wafer by more than 30% compared to the previous 1β (1-beta) node. This enables Micron to efficiently supply the market's needs as data-centric workloads, such as AI, continue to grow.
  • Micron G9 NAND (9th-generation 3D NAND) provides the foundation for new storage innovation, enabling the industry's best storage density. The Micron 6600 ION NVMe SSD, built with Micron G9 QLC NAND, delivers industry-leading capacity of up to 245TB in the E3.L form factor, revolutionizing AI data lakes and hyperscale storage by enabling over 3.9PB per 1U of rack space.
  • Micron's leadership in advanced packaging led to the 12-high HBM3E, delivering a massive 50% capacity increase to 36GB cubes in the same physical footprint as previous 8-high stacks. This allows customers to train and run larger AI models without the need for CPU offload, delivering faster time to insights.
  • CXL-attached memory (like the Micron CZ122 module) helps overcome the "memory wall" problem by enabling server OEMs to scale memory capacity and bandwidth for data-intensive applications.
  • Micron’s monolithic 32Gb DDR5 die-based high-capacity DIMMs enable 128GB RDIMM capacity with 16% lower latency compared to 3DS RDIMMs, supporting improved throughput and capacity for AI and general-purpose workloads.
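The capacity claims above can be checked with back-of-envelope arithmetic. A minimal sketch, assuming an illustrative 16-drive E3.L 1U chassis layout and 3GB (24Gb) HBM DRAM dies (both assumptions for illustration, not Micron-published configurations):

```python
def rack_capacity_pb(drive_tb, drives_per_u):
    """Raw SSD capacity in PB for one rack unit (assumed chassis layout)."""
    return drive_tb * drives_per_u / 1000

def stack_capacity_gb(die_gb, stack_height):
    """HBM cube capacity from per-die capacity and stack height."""
    return die_gb * stack_height

print(rack_capacity_pb(245, 16))   # 245 TB x 16 drives -> 3.92 PB per 1U
print(stack_capacity_gb(3, 12))    # 12-high stack of 3 GB dies -> 36 GB cube
# 12-high vs. 8-high at the same die capacity -> the quoted 50% increase
print(stack_capacity_gb(3, 12) / stack_capacity_gb(3, 8) - 1)
```

The results match the "over 3.9PB per 1U" and "50% capacity increase to 36GB cubes" figures in the list above.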

Sustainability & low energy consumption

Achieving accelerated intelligence in a sustainable way is vital. Micron embeds energy efficiency directly into its technology and products:

  • 1-gamma DDR5 is capable of faster speeds (9200 MT/s) while simultaneously reducing power consumption by up to 20% compared to 1-beta DDR5.
  • HBM3E offers superior power efficiency, consuming 30% less power than competitors in the market.
  • LP-SOCAMM (LPDDR5X-based modular form factor) is designed as a flagship memory solution for AI data centers, enhancing power efficiency and achieving significant performance gains. For multichase and POT3D workloads, LPDDR5X memory consumes up to 77% less power compared to DDR5 memory.
  • SSD solutions like the Micron 9650 and Micron 6600 ION are designed for exceptional energy efficiency and lower carbon emissions, helping reduce operational expenditures (opex). The Micron 9650 even offers a liquid cooling option for enhanced thermal efficiency.
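To make the percentage savings above concrete, here is a minimal sketch of how they translate into wattage. The 1 kW baseline memory power budget per rack is a hypothetical figure for illustration; only the percentage reductions come from the list above:

```python
def power_after_reduction(baseline_w, pct_reduction):
    """Power draw in watts after a stated percentage reduction."""
    return baseline_w * (1 - pct_reduction / 100)

# Hypothetical 1 kW memory power budget, with the stated generational savings:
print(power_after_reduction(1000, 20))  # 1-gamma DDR5 vs. 1-beta: 800 W
print(power_after_reduction(1000, 77))  # LPDDR5X vs. DDR5 (workload-specific): ~230 W
```

At data center scale, reductions of this magnitude compound directly into lower opex and emissions.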

Higher memory performance, lower latency, and greater capacity enable customers to train larger AI models faster and deliver more responsive user experiences across industries. At the same time, Micron’s energy-efficient solutions help organizations reduce operational costs and minimize their environmental footprint. These advances are transforming not just business outcomes, but the way people live and interact with AI.

3. How does Micron’s global collaboration accelerate AI innovation?

Micron acts as a strategic enabler through its collaboration with ecosystem partners, hyperscalers, OEMs and emerging neo cloud players globally:

  • Enablers: Micron works closely with partners such as Intel (e.g., on MRDIMM for Xeon 6 processors) and AMD (e.g., HBM3E integration into Instinct GPUs and DDR5 collaboration), as well as NVIDIA: Micron’s SOCAMM was developed in collaboration with NVIDIA to support the NVIDIA GB300 Grace™ Blackwell Ultra Superchip, and the Micron HBM3E 12H 36GB is designed into the NVIDIA HGX™ B300 NVL16 and GB300 NVL72 platforms.
  • Hyperscalers, OEMs and neo cloud: Micron partners with top hyperscalers, OEMs and the broader ecosystem to enable the global AI revolution. Micron also partners with AI cloud service providers, such as CoreWeave, to help customers scale out to high data throughput using solutions like the low-latency Micron 7600 SSD.
  • Local support: Micron commits to global manufacturing and regional enablement. Our strategy is to support customers, from large hyperscalers to local operators, by providing regional customer development teams that solve problems in real time, ensuring systems are optimized for local conditions and economics.

Entrepreneurship with purpose

My passion lies in taking inflection points and transforming them into inclusive progress. The AI-led transformation is the most significant opportunity of this decade, and it requires the foundational memory and storage technologies that Micron delivers.

Micron provides the necessary performance, capacity and energy efficiency to enable every community and innovator worldwide to harness the potential of AI. As global AI deployments surge — from massive multi-site programs in the U.S. to sovereign clusters emerging in Europe and high-density build-outs across the Middle East and APAC markets — Micron is your trusted partner for memory and storage innovation. Accelerated intelligence will be powered by Micron’s memory and storage, transforming lives, region by region.

Viral Gosalia

Head of EMEA, India and Japan Data Center Business

Viral Gosalia is the Head of EMEA, India, and Japan Data Center Business at Micron Technology. Viral leads strategy, growth, and customer partnerships for advanced memory and storage solutions powering AI and cloud infrastructure across these regions.

With over 15 years of experience spanning product management, business development, and engineering leadership at leading technology companies, Viral brings deep expertise in enabling next-generation compute platforms. Viral holds a Bachelor’s degree in Electronics and Telecommunication from Mumbai University and a Master’s degree in Electrical Engineering from San Jose State University.