
Autonomous Driving AI


Today, the automotive segment is experiencing a very high rate of innovation. The design of the traditional vehicle is changing dramatically, both in how vehicles are developed and in the emerging technologies they adopt.

A highly visible example of this trend is the use of machine learning and artificial intelligence (AI) to support the autonomous vehicle. These new, emerging technologies introduce challenges in both safety and security, with the result that the design and validation of a vehicle can no longer rely on lessons learned in the past.

OEMs, tier-ones and the chain of automotive suppliers are facing new issues related to the safety and security of a vehicle. These issues are exacerbated by the vehicle's connectivity to the external world, which autonomous driving requires.

So what is the status of security and safety in autonomous vehicles?

Some standards have been created. The Road Vehicles – Functional Safety standard, ISO 26262, provides guidelines on how safety must be treated for any component used inside a vehicle. This standard represents the current state of vehicle safety and is commonly used by component and application vendors.

The Society of Automotive Engineers (SAE International) provides guidelines on security with SAE J3061, the Cybersecurity Guidebook for Cyber-Physical Vehicle Systems, which offers guidance not at the component level but on the system side.

Safety Implementations Using Cryptography in AI

Autonomous driving leverages AI to solve problems that conventional software cannot address quickly or easily. AI can be considered a completely new field for automotive developers because most previous experience (lessons learned and continuous improvement) cannot be applied, due to fundamental differences. Any AI system can be affected by issues from different sources:

  • Design: the training of the system may be inaccurate or affected by errors
  • Hardware: the memory (volatile and/or nonvolatile) can introduce errors caused by physical defects and/or random events
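The second class of issue is what hash-based checks are well suited to catch. The sketch below is purely illustrative (it is not Micron's algorithm): it shows why a cryptographic digest detects even a single-bit memory error, because any change to the input produces a completely different digest.

```python
import hashlib

# Illustrative only: a "golden" digest is computed over the pristine image;
# any later bit flip (e.g., from a physical defect) changes the digest.
firmware = bytearray(b"AI model weights and boot code ...")
golden = hashlib.sha256(firmware).hexdigest()

firmware[5] ^= 0x01  # simulate a single-bit error in memory
corrupted = hashlib.sha256(firmware).hexdigest()

assert golden != corrupted  # even a one-bit error is detected
```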

Earlier this month, the International Conference on Optimization and Decision Science (ODS) was held in Taormina, Italy. At this event, Micron delivered a presentation that focused on a new safety implementation that employs embedded attestation capability: Micron AuthentaTM technology.

AI in Automobiles

Authenta™ technology uses a cryptographic algorithm to check the contents of the memory array while calculating a fingerprint of the data sent to it.
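One common way to build such an attestation primitive is a keyed measurement: the device holds a secret key and answers with a keyed hash over its contents, so a verifier learns both that the data is intact and that the answer came from the device holding the key. The sketch below assumes this HMAC pattern; the names (`DEVICE_KEY`, `measure`, `verify`) are hypothetical and Authenta's actual protocol is not shown here.

```python
import hmac
import hashlib

DEVICE_KEY = b"\x00" * 32  # placeholder for a secret provisioned in the device

def measure(memory: bytes) -> bytes:
    """Fingerprint the memory contents with a keyed hash (HMAC-SHA-256)."""
    return hmac.new(DEVICE_KEY, memory, hashlib.sha256).digest()

def verify(memory: bytes, expected: bytes) -> bool:
    """Constant-time comparison of a fresh measurement against the expected one."""
    return hmac.compare_digest(measure(memory), expected)

image = b"boot code v1.2"
expected = measure(image)
assert verify(image, expected)              # intact image passes
assert not verify(b"boot code v1.3", expected)  # modified image fails
```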

The new safety concept proposed during the conference used, as an example, an AI system coupled with a safety hypervisor.

The AI system used two Authenta NOR memory devices. During the boot process, one device checked the integrity of the code stored in memory before sending out data. The second device, connected to the safety FPGA, checked data on the fly while the system RAM was loaded with AI parameters.
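An on-the-fly check like the second one can be sketched as an incremental hash, assuming the parameters are transferred in chunks: the digest is updated as each chunk is copied into RAM, so verification completes when the load does, with no second pass over the data. The function and chunk size below are illustrative, not part of the presented design.

```python
import hashlib

def load_and_hash(source: bytes, ram: bytearray, chunk_size: int = 4096) -> bytes:
    """Copy data into RAM chunk by chunk, hashing each chunk in flight."""
    h = hashlib.sha256()
    for off in range(0, len(source), chunk_size):
        chunk = source[off:off + chunk_size]
        h.update(chunk)   # fold the chunk into the running digest...
        ram.extend(chunk)  # ...while it is written to RAM
    return h.digest()

params = bytes(range(256)) * 64  # stand-in for AI parameters
ram = bytearray()
digest = load_and_hash(params, ram)
assert bytes(ram) == params
assert digest == hashlib.sha256(params).digest()
```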

As explained during the conference, the safety hypervisor's role differs between application boot and run-time execution. During run-time execution, the safety hypervisor checks the code executing in DRAM; it does this offline using a hash algorithm. During boot, the safety hypervisor notifies the application controller about possible execution issues caused by hardware and software errors introduced in the memory.

To summarize, the hypervisor's role is to check the fingerprint of the data used during boot and during run-time by comparing the stored digest with one calculated on the fly as data moves from the code area of the DRAM to the temporary execution area. The hypervisor must be able to detect a mismatch and provide feedback to the main application controller so counter-measures can be put in place.
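That compare-and-report step can be reduced to a few lines. This is a minimal sketch of the role described above, not Micron's implementation; the names (`check_execution_area`, `notify_controller`) are hypothetical.

```python
import hashlib

def notify_controller(message: str) -> None:
    print(f"controller <- {message}")  # stand-in for a real feedback channel

def check_execution_area(stored_digest: bytes, exec_area: bytes) -> bool:
    """Compare the stored golden digest with one computed over the execution area."""
    current = hashlib.sha256(exec_area).digest()
    if current != stored_digest:
        notify_controller("digest mismatch: apply counter-measures")
        return False
    return True

code = b"model inference routine"
golden = hashlib.sha256(code).digest()
assert check_execution_area(golden, code)             # clean copy passes
assert not check_execution_area(golden, code + b"!")  # altered copy is flagged
```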

This new safety concept thus provides an option to check whether AI execution and elaboration are operating correctly, and whether a hardware error is affecting the output. While it does not prevent a system developer from training the AI with incorrect or missing data, it is another step in the development of safety implementations for AI in autonomous vehicles.

Click here for more on Authenta™.

Click here for more on Micron Automotive Solutions.

About Our Blogger

Alberto Troia