When Micron Ventures launched our AI Fund in 2019, our attention was on computer vision workloads, like imagery analysis in industrial settings and autonomous driving. Our second fund broadened our aperture to deep tech: our focus remained on artificial intelligence, but we gave ourselves more leeway to explore nonlinear technology trends. Early in Fund II, we saw glimmers of an emerging trend called foundation models, or generative AI, and by November 2022, boom, it was everywhere.
Our explorations gave us unique insights into the trend. So, starting in 2021, we invested in a series of startups both to leverage generative AI for customer use cases and to improve the underlying technology stack of large language models, a new, massive compute workload built on the foundation of high-performance compute and high-bandwidth memory. These investments span hardware, applications, enabling software and strategies to make everything run efficiently. Let’s take a look!
Here are some companies that are using generative AI in specific industry applications:
- Inworld: Leaders in “mixture of experts” strategies for scaling large language models, Inworld is building the AI brain for nonplaying characters (NPCs) in gaming, improving player experience and latency well beyond off-the-shelf generative AI. The company was recently named among the top five generative AI innovators of the year, alongside Google DeepMind, NVIDIA and OpenAI, at the VentureBeat Transform 2023 event.
- Multiscale: Materials research and development timelines are measured in years, if not decades, and costs are exploding as we learn to manufacture increasingly complex, critical semiconductor and clean energy materials efficiently and at high yields. Multiscale is an AI-based materials R&D platform that cleans and connects materials data silos to accurately identify the “next best test” in R&D. Integrated within existing data science workflows, Multiscale is launching a new interface, powered by generative AI, that lets nontechnologists rapidly explore new experimentation pathways with new materials.
- Pryon: This enterprise knowledge search platform uses secure, scalable large language models for private data. Deployed with customers since 2019, Pryon is well ahead in product and platform design atop generative AI capabilities for use on structured and unstructured enterprise datasets.
- Avicena: We’ve all heard the statistics on the massive power needed by modern data centers to run generative AI workloads. Avicena builds ultralow-power photonic interconnects to attack this energy-scaling issue, replacing copper traces in component-to-component links with microLEDs and dramatically reducing energy consumption.
- Eliyan: Similarly tackling the energy challenges of data communication, Eliyan goes further by eliminating silicon interposers to deliver high-performance connectivity in organic chip packaging. This approach offers the same performance as advanced packaging at a fraction of the total cost of ownership, and it enables greater design flexibility. More HBM3E? Yes, please!
- Normal Computing: Anything but “normal,” Normal was born out of quantum computing. Its thought leaders are building the foundation of “system 2” AI, layering their unique capabilities over existing enterprise language models to address trust, scalability, rationality, model hallucinations and drift in generative AI outputs.
- SambaNova Systems: With a compute system built from the ground up and optimized for large transformer models, SambaNova is the leading contender to bring new accelerator, platform and server architectures to large language model workloads at a global data center scale.
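To give a flavor of the “mixture of experts” idea mentioned above: a lightweight gating network scores a set of expert subnetworks and routes each input to only the top-k of them, so compute grows with k rather than with the total number of experts. The sketch below is a toy NumPy illustration of that general routing technique, not a depiction of Inworld’s actual system; all class and parameter names here are invented for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class MoELayer:
    """Toy mixture-of-experts layer (illustrative only): a gating
    network scores every expert, but only the top-k experts run."""
    def __init__(self, dim, num_experts, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.gate = rng.normal(size=(dim, num_experts))  # gating weights
        self.experts = [rng.normal(size=(dim, dim)) for _ in range(num_experts)]
        self.top_k = top_k

    def forward(self, x):
        scores = softmax(x @ self.gate)            # routing score per expert
        top = np.argsort(scores)[-self.top_k:]     # indices of top-k experts
        weights = scores[top] / scores[top].sum()  # renormalize over chosen experts
        # Only the selected experts compute, so cost scales with k,
        # not with the total expert count.
        return sum(w * (x @ self.experts[i]) for w, i in zip(weights, top))

layer = MoELayer(dim=8, num_experts=4, top_k=2)
out = layer.forward(np.ones(8))
print(out.shape)  # (8,)
```

The design point is the sparsity: adding experts increases model capacity without proportionally increasing per-token compute, which is what makes the approach attractive for scaling large language models.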
These are the teams laying the foundations for how generative AI will change our world. And as a global leader in memory and storage, Micron is empowering these innovators to bring their technologies to reality.
Reach out if you’d like to know more and see other companies in the Micron Ventures portfolio.