Helping self-driving cars become reliable chauffeurs
Why robustness is key for artificial intelligence in vehicle software.
Algorithms can come to misleading conclusions because of ‘classification’, an aspect of supervised learning techniques for artificial intelligence. Imagine you have a group of pictures that are all similar to each other, but nevertheless unique. Perhaps some pictures are of dogs and others are of cats. They’re all mammals that walk on four legs and have a head and a tail, but they’re still different. The dividing line between those two animals is referred to as a classification boundary. If you slightly modify one of the pictures of a cat so that it looks more like a dog, then the AI’s recognition will cross that boundary and classify it as a dog.
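The idea can be shown with a minimal sketch: a toy linear classifier whose decision boundary separates two hypothetical image feature vectors. All numbers and feature names here are invented for illustration, not taken from any real vision model.

```python
# Toy linear classifier: the sign of a weighted score is the
# classification boundary between "cat" and "dog".

def classify(features, weights=(1.0, -1.0), bias=0.0):
    """Label a feature vector 'cat' if its weighted score is positive."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return "cat" if score > 0 else "dog"

original = (0.55, 0.50)   # a cat image sitting just on the cat side
perturbed = (0.45, 0.50)  # a slightly modified version of the same image

labels = (classify(original), classify(perturbed))
# The tiny perturbation pushes the score across the boundary,
# so the two labels disagree.
```

A small change in the input flips the label, which is exactly the fragility that robustness training is meant to reduce.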
To test classification and train an AI module for robustness, you can purposefully disrupt an image. For example, you can rotate it, recolor it, or simulate weather conditions. Training AI in this way helps it consistently make the right decisions.
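A minimal sketch of such disruptions, assuming a tiny grayscale "image" represented as a list of pixel rows. Real pipelines use dedicated augmentation libraries, but the principle is the same: generate disrupted variants of each training image.

```python
# Disruption-style augmentation on a tiny grayscale image
# (each pixel is a value from 0 to 255).

def rotate90(image):
    """Rotate the image 90 degrees clockwise (geometric disruption)."""
    return [list(row) for row in zip(*image[::-1])]

def recolor(image, shift):
    """Shift every pixel value, clamped to the 0-255 range."""
    return [[max(0, min(255, p + shift)) for p in row] for row in image]

def augment(image):
    """Yield disrupted variants for robustness training/testing."""
    yield rotate90(image)       # rotated
    yield recolor(image, 40)    # brighter, e.g. glare
    yield recolor(image, -40)   # darker, e.g. dusk

image = [[10, 200],
         [30, 120]]
variants = list(augment(image))
```

Each variant keeps the same label as the original, so the classifier learns that rotation or lighting changes should not push an image across the classification boundary.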
Context is key
But many more factors must be considered when ensuring robustness for self-driving vehicle AI. Consider time, for example: if a vehicle AI system doesn’t recognize a pedestrian within 5 milliseconds, that might not be a problem. The system might not have to recognize them quite that quickly. But if it can’t recognize a pedestrian within 10 seconds, that certainly is a problem. Time isn’t the only factor, though. The constellation of events, and its effect on correct labelling by the AI module, also matters. A system might be better at identifying a pedestrian walking across the road than at identifying a cyclist riding behind a car.
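The timing aspect can be sketched as a soft budget versus a hard deadline. The 5 ms and 10 s figures follow the example above; the detector function and threshold names are hypothetical.

```python
# Time-bounded perception check: missing a soft budget may be
# tolerable, but exceeding the hard deadline is a safety problem.

import time

SOFT_BUDGET_S = 0.005    # 5 ms: desirable, not strictly required
HARD_DEADLINE_S = 10.0   # 10 s: failing here is unacceptable

def detect_with_deadline(detector, frame):
    """Run a detector and report whether it met the soft budget."""
    start = time.monotonic()
    result = detector(frame)
    elapsed = time.monotonic() - start
    if elapsed > HARD_DEADLINE_S:
        raise TimeoutError("detection missed the hard deadline")
    return result, elapsed <= SOFT_BUDGET_S

def fast_detector(frame):
    """Stand-in for a perception module; always finds a pedestrian."""
    return "pedestrian"

result, within_soft_budget = detect_with_deadline(fast_detector, None)
```

The point is that a robustness requirement is rarely "always answer instantly"; it is "always answer correctly within a deadline derived from the driving situation".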
When determining robustness, it’s crucial that it is considered in the context of the overall safety of the overall function. Safety is not an abstract concept; it’s always a system property. On their own, neural networks and perception functions are never safe or unsafe, because any errors they contain do not pose an immediate risk to the vehicle’s environment or occupants.
A key part of the safety process is the analysis and evaluation of performance limitations, i.e. possible sources of errors and their effects. This is where the robustness of neural networks becomes critically important. The requirements for the degree of robustness are then based on the context – including the environment and driving situation – and on the safety goals, in order to keep the residual risk of the overall function as small as possible. The aim is to maximize the number of conditions a self-driving vehicle’s software can handle with a high degree of robustness.
With all of these considerations, you can imagine that training a vehicle AI module is an extremely long and complex process. But it’s a necessary prerequisite to help self-driving vehicles become robust. Only in this way can they become our reliable chauffeurs of the future.