Artificial intelligence (AI) was a major topic last week at the World Economic Forum in Davos, Switzerland. Discussions focused on its potential and opportunities while raising questions about ethics and the future societal use of AI.
World Economic Forum Founder and Executive Chairman Professor Klaus Schwab stated, “We are living in a time of multiple technological innovations,” where AI is one of the key technologies that is driving the Fourth Industrial Revolution.
Even as AI takes center stage in this revolution, we have a long way to go before we reach its true potential. At Micron, we know that memory and storage innovations deliver dramatic improvements in moving, accessing and analyzing data. Combining the new techniques of AI with ever faster computing power and vast volumes of data results in computers that can learn new skills and transfer them to new applications quickly and with high quality.
Today, we are revealing data from a Micron-commissioned study by Forrester Consulting that highlights how hardware architecture affects the return on investment for artificial intelligence and machine learning implementations. The research identifies the most critical factors necessary for optimal performance of advanced AI and machine learning analytics.
Although advanced analytics offer a great deal of promise for business transformation, most companies are only beginning to explore the execution challenges that complex AI and machine learning models bring. As use cases like image recognition, speech recognition and automation become more advanced, the hardware used to train and run those models will become increasingly important. To better understand the gaps and opportunities, Forrester surveyed IT and business professionals who manage architecture, systems and strategy for complex data.
The study identified several key trends and challenges:
- The location of compute and memory is crucial to performance and success when architecting hardware for AI and machine learning. Eighty-nine percent of respondents say it is important or critical that compute and memory are architecturally close together.
- Although 72 percent of firms run advanced on-premises analytics today, that percentage is expected to shrink to 44 percent in the next three years. Meanwhile, more firms will be running analytics in public clouds and at the edge. For example, 51 percent of respondents said they are running analytics in public clouds, which will increase to 61 percent in the next three years. And while 44 percent run analytics at the edge today, that will grow to 53 percent by 2021.
- Of the possible hardware constraints limiting AI and machine learning today — including compute constraints, programmability and thermal management issues — memory and storage are the most commonly cited concerns. More than 75 percent of respondents recognize a need to upgrade or rearchitect their memory and storage to limit architectural constraints.
While those at Davos focused on the higher-level issues surrounding AI, this study shows that before we get there, we need to take a detailed look at compute, memory and storage configurations to enable the next generation of AI.
The bottom line is that system architecture matters. Whether it’s at the edge, in the cloud or on premises, advanced hardware is necessary to deliver the performance that companies need to drive faster, better results with AI and machine learning analytics.
To learn more and receive the full study, register to attend the upcoming webinar “Hardware Matters – Why Memory and Storage Are Critical to Better AI and ML” on Tuesday, Feb. 5 at http://bit.ly/AIMatters.