"The battle for leadership in providing the compute platform for automotive artificial intelligence and self-driving car applications continues to intensify. Relatively recent entrants in the automotive market (in automotive terms) such as Intel, Qualcomm, NVIDIA and Samsung have been taking the fight to established players such as NXP (now in the process of being acquired by Qualcomm), Texas Instruments, Renesas, Xilinx and others. In recent interactions with other semiconductor processor vendors, Strategy Analytics has seen a shift in their messaging around automated driving. The talk, which used to center mainly on the processing/'brain' part of the automated driving problem, has now shifted to a more holistic and integrated positioning across sensing, on-board processing, communication and cloud. This new automotive “Quad-play” is perhaps a natural move for the likes of Intel and Qualcomm, which either have existing expertise in many of these spaces or are in the process of acquiring more pieces of the jigsaw puzzle." 1
However, many of the automotive semiconductor processor vendors are missing a critical piece of the autonomous puzzle – memory. As more autonomy is integrated into the car, ranging from Level 2, which provides primarily driver assistance, through Level 5, which provides fully autonomous capability, most of today’s memory solutions will be hard pressed to meet the bandwidth requirements of these higher levels.
Currently, automobiles require DRAM solutions with bandwidths of less than 60 GB/s. Systems targeting 2020 model-year cars, which are in development now, use x32 LPDRAM components at I/O signaling speeds of up to 4266 Mb/s; designs for the 2020 and even 2023 model years currently in development are utilizing LPDDR4. Even with Moore’s Law, which has been on a fairly predictable and linear trajectory for the past several decades, doubling transistor count roughly every two years, a significant memory bandwidth gap is developing that is driving the need for new system architectures or new memory technologies.
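As a rough illustration of where today's solutions top out, the peak bandwidth of a single DRAM component is just its per-pin data rate times its bus width. The sketch below uses the x32 LPDDR4 figures cited above; the four-channel configuration is an illustrative assumption, not a specific vehicle design.

```python
# Back-of-the-envelope DRAM bandwidth arithmetic (illustrative only).

def peak_bandwidth_gbps(io_rate_mbps: int, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s for one DRAM component:
    per-pin data rate (Mb/s) x bus width (bits), converted to bytes."""
    return io_rate_mbps * bus_width_bits / 8 / 1000

# One x32 LPDDR4 component at 4266 Mb/s per pin:
lpddr4 = peak_bandwidth_gbps(4266, 32)
print(f"LPDDR4 x32 @ 4266 Mb/s: {lpddr4:.1f} GB/s per component")

# Even a hypothetical four-channel configuration only lands in the
# neighborhood of today's ~60 GB/s requirement:
print(f"4 channels: {4 * lpddr4:.1f} GB/s")
```

A single component delivers roughly 17 GB/s peak, so current automotive needs are comfortably served by a few channels – the problem arises when the target jumps by an order of magnitude.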
It is estimated that ADAS applications will require 512 GB/s – 1024 GB/s of bandwidth to support Level 3 and Level 4 autonomous driving capabilities. Micron is working closely with our partners in the automotive industry to define and develop memory solutions that close this critical gap. The two contenders to meet these bandwidth needs are GDDR6 and HBM2. Micron considers GDDR6 the best alternative for meeting the reliability and temperature-range constraints the automotive industry requires. HBM2, by contrast, is highly integrated (4-high or 8-high stacks mounted in-package on a silicon interposer), which allows for smaller-footprint solutions. However, HBM2 is predicted to be significantly more expensive due to its stacking and silicon interposer, which may require additional resources to ensure reliability in the harsh automotive environment. GDDR memory is a JEDEC-standard memory that is currently mass produced for gaming and video graphics applications. Micron plans to leverage its strength in graphics memory along with its leadership in the automotive market to bring this next-generation technology to market.
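To put the two contenders in perspective, the same arithmetic shows how many components of each type it would take to reach the lower end of the estimated Level 3/4 target. The per-pin rates below are nominal published peaks for the respective JEDEC standards (16 Gb/s x32 for GDDR6, 2 Gb/s across a 1024-bit interface per HBM2 stack), not Micron datasheet values, and the component counts ignore real-world derating.

```python
# Sketch: component count needed to hit a 512 GB/s aggregate target,
# using nominal peak per-component rates (assumed, not datasheet values).
import math

TARGET_GBPS = 512  # lower bound of the estimated Level 3/4 requirement

def components_needed(per_component_gbps: float, target_gbps: float = TARGET_GBPS) -> int:
    """Minimum whole components to meet the aggregate bandwidth target."""
    return math.ceil(target_gbps / per_component_gbps)

# GDDR6: x32 interface at up to 16 Gb/s per pin -> 64 GB/s per component
gddr6_gbps = 16 * 32 / 8
# HBM2: 1024-bit stack interface at 2 Gb/s per pin -> 256 GB/s per stack
hbm2_gbps = 2 * 1024 / 8

print(f"GDDR6 components for {TARGET_GBPS} GB/s: {components_needed(gddr6_gbps)}")
print(f"HBM2 stacks for {TARGET_GBPS} GB/s: {components_needed(hbm2_gbps)}")
```

The arithmetic captures the trade-off in the paragraph above: HBM2 gets there with far fewer devices thanks to its very wide in-package interface, while GDDR6 reaches the same target with more components but without the cost of stacking and a silicon interposer.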
To make the autonomous market a reality, the ‘Quad-play’ needs to become a ‘Cinco-play,’ where memory is included in the holistic and integrated view of the future autonomous car.
1 Bosch Launches NVIDIA-based AI Computer for Automated Driving