Evolving Neural Network Intelligence with Memory Solutions

Our brains carry out thousands of extremely sophisticated operations every day. Whether alerting us to a hot stove or identifying numbers and letters in a document, our nervous system manages complex functions more efficiently than any existing computer system. Computer systems may never reach human-brain-level function, but the advent of neural networks is closing the gap. By mimicking the connections between our neurons, neural networks can achieve new levels of artificial intelligence.

Neural Networks Mimic Human Neuron Clusters

For neural networks to function like human brains, they must quickly analyze the environment and recognize contextual cues in order to act. Imagine how hard it would be for a robot to do what a human firefighter can do: put out fires or clear debris fields while interacting with first responders during an emergency. On top of all of that, picture that robot opening doors, turning off gas valves, and using a fire extinguisher. Matching the ease with which a human performs these tasks seems nearly impossible, or at the very least incredibly niche and complex, but with neural network technology the impossible can become reality.

For machines to perform these types of operations, they need to be trained and programmed to take in data and process it by moving from one decision-making algorithm to the next until a conclusion is reached. These strings of decision-making algorithms are organized into layers, and a stack of layers makes up a neural network, aptly named after the neurons that make up the human brain. These networks are inspired by the clustered neuron structures in our brains, which operate as a series of layers interpreting environmental stimuli.
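
As a minimal illustration of that layered structure (a sketch only, written with NumPy; the layer sizes and random weights below are arbitrary stand-ins, not drawn from any real system), each layer weights its inputs, adds a bias, and hands the result to the next layer:

    import numpy as np

    def layer(x, weights, bias):
        # One decision stage: weight the inputs, add a bias, apply a simple activation.
        return np.maximum(0.0, weights @ x + bias)

    rng = np.random.default_rng(0)
    # A tiny stack of layers: 8 raw sensor values in, 4 possible conclusions out.
    network = [
        (rng.standard_normal((16, 8)), np.zeros(16)),   # first layer reads the raw input
        (rng.standard_normal((16, 16)), np.zeros(16)),  # middle layer refines the features
        (rng.standard_normal((4, 16)), np.zeros(4)),    # final layer scores 4 possible conclusions
    ]

    x = rng.standard_normal(8)                          # stand-in for one raw sensor reading
    for weights, bias in network:
        x = layer(x, weights, bias)                     # each layer hands its output to the next

In a real system the weights would be learned during training rather than drawn at random; the point here is only the shape of the computation, one layer feeding the next.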

In an attempt to bring robotic processes closer to human brain performance, researchers are delving further into the study of the brain, mapping neurons and finding ways to optimize their computerized neural networks to complete complex tasks.

In the brain, electrical signals of activation pass through multiple feature-detecting layers, each one combining and refining the message so that the correct neurons are activated in response. This enables us to recognize shapes, patterns, and features of the physical world, and respond accordingly. The whole process takes milliseconds; recognition happens nearly instantaneously.

Compare this to how a computerized neural network functions. Neural networks take in sensory data from devices like cameras, radar, lidar, gyroscopes, and accelerometers and filter it through their own input layers: a series of coded algorithms designed to mimic human neural function. The collected data is sorted and scaled, and the result passes through a series of decision algorithms at each neural layer. Once it reaches the output layer, the network makes a final decision that mimics a human reaction and course of action.
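
Continuing the sketch above, the path from a sensor reading to a final decision might look like the following (the scaling step, layer stack, and action labels are invented for illustration, not a description of any particular product):

    import numpy as np

    ACTIONS = ["continue", "slow down", "turn", "stop"]   # hypothetical output decisions

    def decide(readings, network):
        # "Sorted and scaled": put the raw sensor values on a comparable scale.
        x = (readings - readings.mean()) / (readings.std() + 1e-8)
        # Pass the scaled data through each decision layer in turn.
        for weights, bias in network[:-1]:
            x = np.maximum(0.0, weights @ x + bias)
        # The output layer produces one score per action; the best score becomes the decision.
        weights, bias = network[-1]
        return ACTIONS[int(np.argmax(weights @ x + bias))]

Calling decide(sensor_readings, network) with the layer stack from the earlier sketch returns one of the four action labels, mirroring the output layer's final decision described above.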

The speed at which micro-decisions and sensory data collation occur within computerized neural networks is approaching that of a human brain for very specific tasks. More complex decisions, requiring contextual clues and subtleties, are still difficult for neural networks to process; humans certainly have the edge there. As data moves through these networks faster and the decision layers become more sophisticated, neural networks will come closer to matching the efficiency of human brain function.

Memory Lets Neural Networks Make Their Own Decisions

That increase in speed hinges on storing the data and accessing it with very high bandwidth so that an AI algorithm can sort through it. This process requires high-speed memory like Micron’s GDDR6 technology, which helps these computerized neural networks make decisions as fast as possible.

Micron’s Cloud Segment Customer Program Manager Gregg Wolff, an expert in how memory can make speedy neural networks possible, thinks that the future is bright for their lasting impact. Wolff explains that luminaries in the field of neural networks “…liken the AI neural network revolution to the turn of the century when electricity was introduced. AI, like electricity, is going to completely change how certain industries process and use the information that's available to them.”

A neural network’s decision-making algorithms require intensive mathematical processing and data analysis, both of which increase the need for faster memory and storage. This is especially important in the cloud at hyperscale data centers, where Micron GDDR devices perform key roles in GPU-based big data processing and Micron’s portfolio of DRAM and SSDs speeds up the flow of data throughout.
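
To see why bandwidth matters, a rough back-of-the-envelope calculation helps (the layer size and bandwidth figure below are assumptions for illustration, not Micron specifications): if a layer's weights have to be streamed from memory for every inference, memory bandwidth alone puts a ceiling on how many decisions per second the network can make.

    # Hypothetical layer: 4,096 inputs x 4,096 outputs stored as 16-bit weights.
    weights_bytes = 4096 * 4096 * 2                   # ~33.6 MB read per inference
    bandwidth_bytes_per_s = 64e9                      # assume memory sustaining 64 GB/s

    # If every inference re-reads the weights, bandwidth caps the decision rate:
    max_inferences_per_s = bandwidth_bytes_per_s / weights_bytes
    print(f"{max_inferences_per_s:,.0f} inferences per second")   # roughly 1,900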

“There's just a huge volume of data that's flowing in through the data center, and it's very difficult for humans to go and define all the features and all the code and pass all that data back and forth,” Wolff said.

“A lot of information can flow through neural networks and scalable neural networks with high-performance hardware, which allows folks to extract value out of that information as close to real time as possible.”

Just as your brain reads the many signals your body sends its way, identifying the signal from your hand that says the oven is hot, a neural network can read the mass of data points from a camera and note exactly where a robot will have to perform cleanup duty.

GDDR5 and GDDR6 Take Neural Networks to the Next Level

If you want to speed up your brain, you need to improve your memory recall; there are endless apps, sudoku workbooks, and other tools available for people who want to increase their reaction speed. Similarly, neural network capability scales with processing performance. Micron pushes the performance envelope, continually developing faster DRAM, NAND flash, and GDDR devices. GDDR5 and GDDR6 are the memory technology of choice for the GPU-based graphics cards used in neural networks, and GDDR6 takes that performance to even higher levels, with up to twice the memory bandwidth of GDDR5. As of June 2018, this efficient and powerful memory has entered mass production for high-performance applications.
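
The "up to twice the memory bandwidth" figure is easy to sanity-check with commonly cited per-pin data rates (ballpark numbers, not the specification of any particular Micron part): a single GDDR device typically has a 32-bit interface, so per-device bandwidth is the per-pin rate times 32 bits, divided by 8 bits per byte.

    def device_bandwidth_gb_s(per_pin_gbit_s, bus_width_bits=32):
        # Peak bandwidth of one device: per-pin rate x pin count, converted to bytes.
        return per_pin_gbit_s * bus_width_bits / 8

    print(device_bandwidth_gb_s(8))    # GDDR5 at ~8 Gb/s per pin  -> 32 GB/s per device
    print(device_bandwidth_gb_s(16))   # GDDR6 at ~16 Gb/s per pin -> 64 GB/s, about double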

“Over time, everyone’s going to try to find that right hardware footprint, and memory is a very important piece of this process,” Wolff explained.

“Micron is committed to finding value-added solutions that service the particularly high-bandwidth needs of neural network training and inference deployments.”

The future is in neural networks; as computers begin to act more like humans, they will form the underlying intelligent fabric of our lives at high levels of speed and efficiency. All the while, Micron will power this revolution, enabling these networks to do good, such as matching the skills of a first responder by identifying debris on the ground or fires in buildings. As we expand the capabilities of computer systems to new heights and push the boundaries of innovation, we should remember that these advancements are the result of the sheer power and design of the human brain.