18/02/2025
 - 4 min.

Safety Integrity of Machine Learning in Automotive Software

Integrating Machine Learning into automotive software development requires adhering to multiple regulations and standards to ensure a system’s safety, reliability, and compliance. Which standards are in place, what do they mean, and what do they imply for the testing process?

  • Technology

The rapid development of automotive technology has led to an increased integration of Machine Learning into safety-critical systems, for example in advanced driver assistance systems and automated driving (ADAS/AD) functions, where computer vision is used to recognize traffic signs. When Machine Learning is incorporated into the development of safety-critical functions, intense and thorough testing is necessary to make these functions suitable for series development. This requires adhering to the following standards:

  • ISO 29119: Addresses software testing in general, with a dedicated section for testing AI models.
  • ISO PAS 8800: Provides requirements and guidelines for AI systems in road vehicles. However, the standard does not define how these systems should achieve ASIL (Automotive Safety Integrity Level) compliance. This absence of ASIL-specific guidelines for Machine Learning makes meeting automotive safety standards challenging.
  • ISO 26262: Describes the implementation of functional safety in the automotive domain with regard to the required ASIL, but lacks specific guidelines for Machine Learning. It doesn’t cover lifecycle phases, testing methods, or safety requirements tailored to Machine Learning, creating a gap when applying it to AI-driven systems.
  • IEC 61508: Provides a comprehensive framework for the functional safety of electrical, electronic, and programmable electronic systems and serves as the basis for numerous sector-specific functional safety standards.

Emil Gracić, Product & Process Owner for Safe AI Development, and Gregor Pawelke, AI Quality Specialist at CARIAD, have proposed a concept aiming to close the gaps occurring in the automotive standards ISO 26262 (Functional Safety) and ISO PAS 8800 (AI Safety).

Focusing on testing Machine Learning products in ADAS/AD functions, Emil and Gregor recommend three additional lifecycle phases: prepare data, train model, and deploy model. Defined properties, such as robustness, uncertainty handling, and interpretability, are set for the outputs of each lifecycle phase. The properties are tested with required Machine Learning test methods. The more effective the test methods are at achieving the desired properties, the higher the ASIL achieved. This is based on IEC 61508, where a “rigor” value is assigned to test methods, which in turn reflects their effectiveness at achieving the desired properties. Emil explains:


Our concept supports product teams in achieving their safety goals and at the same time facilitates the certification process, reduces ambiguity and enhances the overall safety and reliability of Machine Learning-based automotive systems.

Emil Gracić / Product & Process Owner for Safe AI Development at CARIAD
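The idea of rating test methods by rigor and deriving an achievable ASIL from them can be sketched in a few lines. This is a purely hypothetical illustration: the method names, rigor scores, and thresholds below are invented for the example and do not come from IEC 61508, ISO 26262, or the concept described here.

```python
# Hypothetical sketch: assign a "rigor" score to each ML test method and
# map the strongest applied method to an achievable ASIL, loosely inspired
# by the IEC 61508 idea of rating test-method effectiveness.
# All names, scores, and thresholds are illustrative assumptions only.

RIGOR = {  # invented rigor scores per test method (0.0 to 1.0)
    "adversarial_perturbation_test": 0.9,
    "out_of_distribution_detection": 0.8,
    "data_coverage_analysis": 0.6,
    "saliency_map_review": 0.4,
}

# invented thresholds: higher rigor -> higher achievable ASIL
ASIL_THRESHOLDS = [(0.9, "ASIL D"), (0.7, "ASIL C"),
                   (0.5, "ASIL B"), (0.3, "ASIL A")]

def achievable_asil(applied_methods):
    """Return the highest ASIL whose threshold the best applied method meets."""
    best = max((RIGOR[m] for m in applied_methods), default=0.0)
    for threshold, level in ASIL_THRESHOLDS:
        if best >= threshold:
            return level
    return "QM"  # quality management only, no ASIL claim

print(achievable_asil(["data_coverage_analysis", "saliency_map_review"]))  # ASIL B
print(achievable_asil(["adversarial_perturbation_test"]))                  # ASIL D
```

In a real assessment the aggregation would be far richer than taking a maximum, but the sketch captures the direction of the mapping: more effective test methods support a higher integrity claim.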

Emil and Gregor’s proposed concept is already being used by development teams at Volkswagen. “The AI testing concept helped us to systematically identify and document ‘white spots’ in the data selection for our ML model. By implementing it, we can ensure that the model operates within predictable and defined limits, which at the same time increases the understanding of how our algorithm behaves,” says Felix Stahl, ADAS Pre-Developer at Volkswagen. His colleague Jonas Kaste adds: “In addition, we are able to develop, test and evaluate scenario-dependent KPIs to provide objective criteria for release and reduce the need for time-consuming downstream testing in the vehicle.”

“We want to carry our understanding of AI safety beyond CARIAD and help to shape the state-of-the-art,” says Gregor. “The challenges we are trying to tackle occur in the entire automotive software industry and our approach can help with that.” 

Machine Learning (ML)

Is the process of optimizing model parameters through computational techniques, such that the model's behaviour aligns with data or experience and enables prediction beyond the training set. [SOURCE: ISO/IEC 22989:2022, 3.3.5, modified]. 

Artificial Intelligence (AI)

Artificial intelligence is the ability of a computer technology to mimic human abilities such as logical thinking, learning, planning, and creativity. AI is able to learn from data without the need to program specific instructions. AI enables technical systems to perceive their environment, deal with what is perceived, and solve problems in order to achieve a specific goal.

Computer Vision

Using Machine Learning in computer vision enables computers and systems to extract information from images, videos, or other visual input. On this basis, the system makes recommendations for action.

Automotive Safety Integrity Level (ASIL)

Classification system described in ISO 26262 for evaluating electric/electronic systems used in motor vehicles. Each level is associated with safety-relevant design principles which must be complied with to minimize the risk of failure.

CARIAD Media Team
