The Linley Fall 2018 Processor Conference took place on Halloween. There was nothing too scary happening, unless you are afraid of the performance requirements for moving information, or of needing to run faster than ever for markets like automotive, industrial internet of things (IIoT), and cloud data centers. Day one was full of interesting views, but it all came back to the ability to move data around and the memory needed to support the processing that turns data into information, thus accelerating intelligence.
Linley Gwennap kicked off the morning, using artificial intelligence (AI) as the example of a domain where architectures will need to change to deliver performance for the neural networks now being created. He outlined how coming machine learning and deep learning workloads will stress traditional processing architectures.
Linley introduced market examples: what NVIDIA is doing in automotive with Xavier, how IIoT will be adopted in consumer markets (driving volume and adoption up), and how data centers are pushing interfaces to PAM4 at 400G, aided by broader adoption of smart network interface controller (NIC) cards that offload costly processing from the CPU.
In the next session, Kevin Deierling, VP of Marketing at Mellanox, talked about how smart NICs can improve data center security using "BlueField". He equated the old and new ways of protecting a data center to Halloween candy. It used to be like an "M&M," hard on the outside and soft on the inside. In the new data center with many different workloads, we now need to move to the "jawbreaker" model: hard on the outside and hard on the inside. This was a great setup for Salman Jiva of Micron to follow up.
Salman Jiva, Micron Senior Business Manager
Moving on to memory options, Salman Jiva, Senior Business Development Manager at Micron, provided an intriguing discussion around improving server efficiency by optimizing the network interface and its memory. He outlined market trends, the need for speed in I/O, and where standard server memories are heading. Salman described what the "modern cloud" looks like and what its applications will require. He then introduced the value of smart NICs, which enable nimble solutions like packet forwarding that free the CPU to focus on the processing it needs to execute; the result is a more efficient CPU with more work offloaded to the smart NIC. Salman then outlined the benefits of different memory options and compared the requirements of 400G network bandwidth when using DDR4 versus GDDR6.
Servers still need density, whereas smart NICs need higher bandwidth, with attention to cost, implementation, and power. GDDR6 addresses the power, the board area for NICs, and the total cost of ownership. Not too scary for Halloween, but eye-opening trends for memory and processing architectures to meet the coming demand for AI. It is all about accelerating intelligence while meeting power and total-cost-of-ownership targets.
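To get a feel for why GDDR6 is attractive at 400G, here is a rough back-of-the-envelope sketch (not from the talk) comparing per-channel bandwidth. The assumptions are illustrative: DDR4-3200 on a standard 64-bit channel, a GDDR6 device at 14 Gbps per pin on a 32-bit interface, and a simple buffer model where every packet is written to memory and read back once, so memory bandwidth must be about twice the line rate.

```python
import math

# 400 Gb/s line rate expressed in GB/s.
line_rate_GBs = 400 / 8  # 50 GB/s

# DDR4-3200 on a 64-bit (8-byte) channel: 3200 MT/s * 8 B.
ddr4_channel_GBs = 3200e6 * 8 / 1e9  # 25.6 GB/s

# GDDR6 device at an assumed 14 Gb/s per pin, x32 interface.
gddr6_device_GBs = 14e9 * 32 / 8 / 1e9  # 56 GB/s

# Store-and-forward buffering: each packet is written then read,
# so required memory bandwidth is roughly 2x the line rate.
required_GBs = 2 * line_rate_GBs  # 100 GB/s

ddr4_channels_needed = math.ceil(required_GBs / ddr4_channel_GBs)
gddr6_devices_needed = math.ceil(required_GBs / gddr6_device_GBs)

print(f"Required buffer bandwidth: {required_GBs:.0f} GB/s")
print(f"DDR4-3200 channels needed: {ddr4_channels_needed}")
print(f"GDDR6 devices needed:      {gddr6_devices_needed}")
```

Under these assumptions, a 400G NIC would need four DDR4 channels but only two GDDR6 devices, which is the kind of pin-count and board-area saving the talk pointed toward; real designs depend on access patterns, overheads, and achievable efficiency.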
Day two opened with a riveting, energetic presentation from Google AI leader Cliff Young, who led a discussion of the tensor processing unit (TPU) architecture's history and the challenges of inference versus training. He pointed to the exponentially growing number of papers, projects, and conversations revolving around AI.
These were just a few of the presentations at Linley. Other companies and topics included Synopsys on IP for LiDAR and radar, Arm on its newly released automotive IP core, Rambus on memory systems for AI, and Ryan Baxter of Micron on how AI is shaping the next generation of memory solutions.
Ryan Baxter, Micron Director of Cloud & Networking
The Linley conference did a great job of covering the critical concerns of safety, security, power, and performance, with very good dialogue and sometimes controversial approaches to compute architectures. One common theme is the need for sufficient memory and storage bandwidth to "feed the beasts" in applications like AI and so accelerate intelligence.