If you've been following technology trends at all, it's easy to see that Artificial Intelligence (AI) and Machine Learning (ML) are currently all the rage. But why? People have been talking about AI for decades, so what changed to make it so popular now?
To answer that question, we're going to go back a bit for some context.
While AI may seem like a new thing, the first real use of the term "Artificial Intelligence" came in 1956 (at the Dartmouth Summer Research Project on Artificial Intelligence, a small workshop with about 50 attendees). Through the 1950s, 1960s, and 1970s, a large body of research laid the foundation for techniques still in use today: Bayesian methods, heuristics, semantic nets, perceptrons, and backpropagation.
Unfortunately, after capturing the public's attention with the wonderful things AI would be able to do, the field entered an "AI winter" when those promises went unmet. This was partly due to the limitations of the algorithms of the time: AI programs didn't learn from previous examples but instead had to be programmed by developers with deep knowledge of the specific domain. Work and research continued, but the results weren't as engaging as the initial vision.
Fast forward to the 1990s, and Machine Learning starts picking up steam: early autonomous cars appear, IBM's Deep Blue defeats chess world champion Garry Kasparov, and web crawlers become essential for navigating the World Wide Web.
From the 1990s through the early 2010s, Machine Learning flourished. The MNIST dataset became the standard benchmark for recognizing handwritten digits. Netflix launched a competition for a machine learning algorithm that could beat its own recommendation software. The ImageNet dataset and its associated competition drove advances in computer vision, and the website Kaggle launched to host machine learning competitions.
NVIDIA's CUDA and the OpenCL framework were released in 2007 and 2009, respectively. These allowed graphics processing units (GPUs) to be used for general-purpose computing – not just graphics-intensive games – which greatly expanded the hardware available for highly parallelizable tasks.
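To make that concrete, here is a minimal sketch of general-purpose GPU computing from Python, assuming the CuPy library (a NumPy-compatible GPU array library) and a CUDA-capable GPU are available; neither is discussed above, so treat this purely as an illustration. The same element-wise math runs once on the CPU and once as a GPU kernel spread across thousands of threads.

```python
# A minimal GPGPU sketch, assuming CuPy and a CUDA-capable GPU are available.
import numpy as np
import cupy as cp  # assumption: CuPy is installed and a CUDA GPU is present

n = 10_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

# The same element-wise operation, once on the CPU and once on the GPU.
c_cpu = a * b + 1.0                # runs on the CPU

a_gpu = cp.asarray(a)              # copy the arrays into GPU memory
b_gpu = cp.asarray(b)
c_gpu = a_gpu * b_gpu + 1.0        # executes as a CUDA kernel across many threads
cp.cuda.Stream.null.synchronize()  # wait for the asynchronous GPU work to finish
```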
Alongside the advances in software and algorithms, there were huge advances in hardware.
The first CUDA-enabled general-purpose GPU was the NVIDIA® Tesla® C870, released in 2007 as part of NVIDIA's Tesla microarchitecture and using high-performance GDDR3 memory. It was revolutionary at the time, providing 0.52 Tflops of compute and 77 GB/s of memory bandwidth. NVIDIA's newest card – the Tesla T4 GPU Accelerator with Micron® GDDR6 memory – provides an astounding 8.1 Tflops and 320 GB/s of memory bandwidth. Amazingly, power consumption has actually decreased from the C870 to the T4, from 170 watts to 70 watts.
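A quick bit of arithmetic on those figures puts the gap in perspective; every number below is taken from the specs quoted above.

```python
# Ratios computed from the spec figures quoted above (C870 vs. T4).
c870 = {"tflops": 0.52, "gbps": 77, "watts": 170}
t4   = {"tflops": 8.1,  "gbps": 320, "watts": 70}

print(t4["tflops"] / c870["tflops"])     # ~15.6x the compute throughput
print(t4["gbps"] / c870["gbps"])         # ~4.2x the memory bandwidth
print((t4["tflops"] / t4["watts"]) /
      (c870["tflops"] / c870["watts"]))  # ~38x the performance per watt
```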
Alternatively, if we look at the Top500 list of supercomputers as an indicator of available compute performance we see some amazing trends. Over the past 10 years, the mean performance of the list is 9,740% faster today than it was in 2008. In the 10 years prior to that we saw a 53,880-percent improvement. Today, a desktop computer can be assembled for a few thousand dollars that matches the mean performance of the list in 2008 (25 Tflops).
Furthermore, the ability to feed data to these powerful accelerators has also increased, thanks to the development of NAND-based solid state drives (SSDs). In 2009, Micron released our first SSD, the Crucial™ RealSSD C300. At $799 for a 256 GB device, its performance per dollar was fantastic for the time, and it was one of the fastest drives on the market at 350 MB/s reads and 215 MB/s writes. Nearly 10 years later, the landscape has changed drastically. On the performance side, the 9200 NVMe drive delivers 4.6 GB/s reads and 3.8 GB/s writes. On the capacity side, we have the industry's first enterprise quad-level cell (QLC) drive, the 5210 ION SSD. The improved density of QLC, along with 3D NAND and CMOS-under-the-array technology, has driven SSD prices below 20¢/GB, making fast storage accessible for all applications.
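The same kind of back-of-envelope math works for storage; the figures below come from the drives mentioned above, with today's QLC price taken as the 20¢/GB ceiling rather than an exact street price.

```python
# Rough ratios from the SSD figures quoted above.
c300_price_per_gb = 799 / 256        # ~$3.12/GB for the C300 in 2009
qlc_price_per_gb  = 0.20             # "below 20 cents/GB" today (upper bound)

read_speedup = (4.6 * 1000) / 350    # 9200 NVMe reads vs. C300 reads: ~13x
price_drop   = c300_price_per_gb / qlc_price_per_gb  # ~15x cheaper per gigabyte
print(read_speedup, price_drop)
```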
Now we're getting closer to the answer of "Why now?"
When people today talk about the great things AI is doing, they're often talking about advances in Deep Learning. Deep Learning requires significantly more compute power than earlier Machine Learning algorithms: training high-accuracy models means passing each piece of data through many layers of mathematical functions.
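As a toy illustration of what "multiple layers of mathematical functions" looks like, here is a small fully connected forward pass in NumPy. The layer sizes are arbitrary choices for the sketch and don't correspond to any real image-recognition model.

```python
# A toy forward pass: one input flowing through several layers of math.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)          # simple nonlinearity applied after each layer

# Hypothetical layer sizes: a flattened 28x28 image -> two hidden layers -> 10 classes.
sizes = [784, 256, 128, 10]
weights = [rng.standard_normal((m, n)) * 0.01 for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(784)           # one input "image"
for W in weights[:-1]:
    x = relu(x @ W)                    # each layer: matrix multiply plus nonlinearity
logits = x @ weights[-1]               # final layer produces 10 class scores
print(logits.shape)                    # (10,)
```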
One of the top image recognition models uses 152 neural network layers and requires more than 10 billion operations per image. Each image is processed thousands of times during training, and one of the standard training sets (ImageNet) contains more than a million images.
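A back-of-envelope calculation shows why that adds up to an enormous amount of compute. The per-image operation count and dataset size come from the numbers above; the number of passes per image is an assumed round figure standing in for "thousands of times."

```python
# Back-of-envelope training compute (figures from the text; passes is an assumption).
ops_per_image = 10e9        # more than 10 billion operations per image
images = 1.2e6              # "more than a million images" in ImageNet (rough figure)
passes = 1_000              # assumption: each image processed on the order of 1,000 times

total_ops = ops_per_image * images * passes
print(f"{total_ops:.1e} operations")   # ~1.2e19 operations for one training run
```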
These new models are only viable because of the amazing increases in compute availability over the past few years. As models took advantage of GPGPUs to accelerate training, the industry responded with hardware designed specifically for Deep Learning tasks, and with that additional compute performance available, the models could become even more complex. Comparing the training speed of CPUs and GPUs makes it easy to see just how large an impact this has had.
In this image recognition benchmark, the GPU-based system processes almost 400 images/second with a single GPU, while the CPU, a 20-core processor, achieves only 9 images/second. Additionally, the number of GPUs in a system can be scaled: we tested up to four GPUs and saw near-linear scaling for this benchmark. The four GPUs are 160 times faster than the single CPU.
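Those throughput numbers reconcile with a couple of lines of arithmetic; the scaling-efficiency factor below is an assumption chosen to reflect "near linear" rather than perfect scaling.

```python
# Reconciling the benchmark figures quoted above.
cpu_imgs_per_s = 9                  # 20-core CPU
gpu_imgs_per_s = 400                # single GPU, "almost 400 images/second"
num_gpus = 4
scaling_efficiency = 0.9            # assumption: ~90% efficiency, i.e. near-linear scaling

four_gpu_rate = num_gpus * gpu_imgs_per_s * scaling_efficiency   # ~1,440 images/second
print(four_gpu_rate / cpu_imgs_per_s)                            # ~160x the CPU
```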
In our current Cloud Age, GPU-enabled systems can be spun up with any of the major cloud providers at a moment's notice and used for just as long as necessary. Supercomputer performance has never been more accessible. This is one of the biggest drivers of "Why now?"
And while there are amazing things happening right now, I'm even more excited for what the next five years will bring.
Learn more at www.micron.com.
Stay up to date by following us on Twitter @MicronStorage and connecting with us on LinkedIn.