We live in exciting times for technology users. AI has become a market force, transforming every sector of the economy. AI and machine learning (ML) are finally being embedded in commercial applications, delivering mainstream analytics and AI as a pragmatic enabler for business. Yet experts believe we're still in the early days of AI. And while there's plenty of buzz about admittedly exciting AI/ML, practical advice for real, relatable results is harder to find.
So, like that uber-smart kid in the front row, we're raising our hand. "Hear, hear! Call on us!"
Offering practical AI advice was the motivation for the latest white paper from Micron's storage team, which designs revolutionary 3D NAND flash storage technology for data centers and the cloud. Download "AI and Machine Learning Demand Fast, Flexible Infrastructure" to see how three key innovations made mainstream AI possible, and why the right storage and memory are foundational for faster and more accurate AI/ML training and inference.
Why the focus on AI storage and memory? My Micron colleague Wes Vaske explores infrastructure for AI/ML systems. In his blog "Getting to the Heart of Data Intelligence with Memory and Storage," he noted the lack of performance data on the underlying storage and memory; the conversation was instead dominated by the available compute resources (GPUs, CPUs, FPGAs, TPUs, etc.). But this is changing as the field matures beyond those early days. Wes says, "the future is going to rely on our ability to architect storage systems that can manage the requirements of the next-generation GPUs."
Ingest, Transform, Train, Execute
Micron memory and storage have been heroes in AI's transformation into highly adaptable, self-training, ubiquitous machine-learning systems for mainstream use. Another colleague, Tony Ansley, a Micron senior technical marketing engineer, posted a three-part blog series culminating with "Artificial Intelligence and Machine Learning Demand High-Performance Storage." Tony built his infrastructure discussion and advice around the four phases of the AI/ML workflow: Ingest, Transform, Train, and Execute.
Said Tony, “I wanted to highlight what we think are the key hurdles to overcome and how Micron can help as an organization starts its journey into the AI/ML world as a commercial enterprise.” From Tony’s blogs and other Micron content, such as our AI infographic, the new white paper details how flash memory and storage let you get more data closer to your processing engines for faster analytics.
Faster and with More Parallel Processes
GPUs, a key enabler of faster processing, can handle millions of operations in parallel, while CPUs process operations sequentially. Paired with these processors, Micron memory and storage provide the broad spectrum of high-performance components critical for the advanced AI/ML and even deep learning solutions now being deployed more broadly. The faster your solution can feed usable training data sets to your AI engine, the faster you can deploy this new technology and benefit from smarter edge functionality.
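As a toy illustration of that parallel-versus-sequential distinction (a sketch for intuition, not a Micron benchmark and not actual GPU code): the same elementwise computation can be expressed as an explicit sequential loop, or as a single vectorized, data-parallel operation of the kind that GPUs accelerate massively.

```python
import time
import numpy as np

data = np.random.rand(10_000_000)

# Sequential: one element at a time, the way a naive CPU loop works.
start = time.perf_counter()
out_seq = [x * 2.0 + 1.0 for x in data]
t_seq = time.perf_counter() - start

# Data-parallel: one operation applied across the whole array at once,
# the computation style that parallel hardware is built to accelerate.
start = time.perf_counter()
out_vec = data * 2.0 + 1.0
t_vec = time.perf_counter() - start

print(f"sequential: {t_seq:.3f}s  vectorized: {t_vec:.3f}s")
```

Even on a single CPU core the vectorized form is dramatically faster; spreading that same data-parallel work across thousands of GPU cores widens the gap further, which is why keeping those cores fed with data becomes the bottleneck.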
Powerful, Small Memory Devices for the Network Edge
High-performance, high-capacity memory and multichip packages power AI training and inference engines, whether in the cloud or embedded in mobile and edge devices. Innovative memory technologies for AI systems are a focus for Micron. The white paper details how to serve up huge volumes of data in real time to train models and accelerate inference, and how to equip edge devices with the memory and storage that keep them smart, fast and efficient.
Time to Focus on Memory and Storage for AI/ML
A Forrester Consulting study commissioned by Micron indicates that, for most organizations, AI architecture is getting the spotlight. Asked whether upgrading or rearchitecting memory and storage is critical to meeting future AI/ML training goals, almost 80% of the 200 IT and business professionals surveyed said yes; these respondents manage architecture or strategy for complex data sets at large enterprises in the U.S. and China. In addition, 90% of the firms indicated that moving memory and compute closer together is essential for AI/ML success.
We invite you to download your copy of “AI and Machine Learning Demand Fast, Flexible Infrastructure.” Learn more at micron.com/AI about how Micron products can help you be successful in your next AI/ML project. Stay up to date with Micron by following us on Twitter @MicronStorage and connecting with us on LinkedIn.