Between man and machine: Research for the future of automated driving
Using AI to model human driving behavior
Automated driving is one of the most complex challenges in the mobility industry. For vehicles to navigate urban traffic safely and efficiently, they must not only perceive their surroundings accurately but also reliably predict the behavior of other road users. Existing approaches already allow vehicles to react to other road users by taking their observed actions into account. However, these approaches often struggle to capture genuinely interactive behavior in complex traffic scenarios. This is the focus of an ongoing research project in the ADAS/AD domain at CARIAD, in which PhD students work alongside experts to tackle these challenges.
Research Goal: Modeling Human Driving Behavior
The project aims to develop behavior models that realistically replicate human driving in highly interactive scenarios.
As part of their automated driving features, cars need a model of how surrounding vehicles will move over the next few seconds. This short-term prediction is essential for planning a comfortable and safe route. But making accurate predictions is challenging because of the complexity of human driving behavior, and because of a circular dependency: the predictions for other vehicles depend on the behavior of your own vehicle.
The project therefore develops behavior models that realistically replicate human driving in highly interactive scenarios. These models are then executed in the vehicle to compare how different plans affect the behavior of surrounding vehicles.
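To make this coupling concrete, the sketch below rolls a simple scene forward in closed loop: at every step, a placeholder behavior model predicts each surrounding vehicle's reaction conditioned on the ego vehicle's planned state, so the same horizon yields different predicted traffic depending on what the ego vehicle intends to do. All names, dynamics, and thresholds here are illustrative assumptions, not the project's actual models.

```python
import numpy as np

# Minimal sketch of closed-loop prediction conditioned on an ego plan.
# "BehaviorModel" is a hypothetical stand-in for a learned model; here it
# simply holds its speed unless the ego vehicle is close ahead in its lane.

class BehaviorModel:
    def predict_accel(self, agent_state, ego_state):
        # Brake if the ego vehicle is close ahead in the same lane,
        # otherwise keep the current speed.
        gap = ego_state["s"] - agent_state["s"]
        same_lane = agent_state["lane"] == ego_state["lane"]
        if same_lane and 0.0 < gap < 20.0:
            return -2.0  # m/s^2, simple reaction to a vehicle cutting in
        return 0.0

def rollout(ego_plan, agents, model, dt=0.1):
    """Step the scene forward; each agent reacts to the evolving ego state."""
    agents = [dict(a) for a in agents]  # copy so repeated rollouts start alike
    trajectories = {i: [] for i in range(len(agents))}
    for ego_state in ego_plan:  # the ego plan is a sequence of future states
        for i, agent in enumerate(agents):
            accel = model.predict_accel(agent, ego_state)
            agent["v"] = max(0.0, agent["v"] + accel * dt)
            agent["s"] += agent["v"] * dt
            trajectories[i].append((agent["s"], agent["v"]))
    return trajectories

# Two candidate ego plans: stay in lane 0 vs. change into lane 1.
def make_plan(target_lane, v=14.0, steps=30, dt=0.1):
    return [{"s": 10.0 + v * dt * k, "v": v, "lane": target_lane} for k in range(steps)]

agents = [{"s": 0.0, "v": 15.0, "lane": 1}]  # one vehicle approaching from behind in lane 1
model = BehaviorModel()
for name, plan in [("keep lane 0", make_plan(0)), ("merge into lane 1", make_plan(1))]:
    traj = rollout(plan, agents, model)
    print(name, "-> predicted final speed of rear vehicle:", round(traj[0][-1][1], 2), "m/s")
```

Running the two rollouts shows the dependency directly: the rear vehicle keeps its speed if the ego vehicle stays in its own lane, but is predicted to brake if the ego vehicle merges in front of it.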
The foundation is reinforcement learning (RL), an AI technique that optimizes models through reward signals. In a closed-loop simulation, algorithms are trained to maximize a reward function, such as staying in the lane while maintaining a target speed. To ensure realistic outcomes, the reward signal is reconstructed from real-world driving data and evaluates actions such as steering angle and acceleration.
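The following sketch illustrates the idea of reconstructing a reward signal from logged driving data under strong simplifying assumptions: a linear reward over hand-crafted features is fitted so that recorded human actions (steering, acceleration) out-score randomly perturbed alternatives. This contrastive fit is a stand-in for the reconstruction step described above, not the project's actual method, and all feature and parameter choices are assumptions.

```python
import numpy as np

# Hedged sketch: reconstructing a reward signal from logged driving data.
# Assumption: a linear reward over hand-crafted features of (state, action),
# fitted so that the logged human action out-scores a perturbed alternative.

rng = np.random.default_rng(0)

def features(speed, lane_offset, accel, steer):
    """Illustrative features; the target speed is an assumption."""
    target_speed = 13.9  # ~50 km/h
    return np.array([
        -(speed - target_speed) ** 2,  # stay near the target speed
        -lane_offset ** 2,             # stay centered in the lane
        -accel ** 2,                   # avoid harsh acceleration or braking
        -steer ** 2,                   # avoid abrupt steering
    ])

# Synthetic "logged" human samples: gentle inputs close to the target speed.
human = np.array([features(rng.normal(13.9, 0.5), rng.normal(0, 0.1),
                           rng.normal(0, 0.3), rng.normal(0, 0.02)) for _ in range(500)])
# Alternatives: similar states, but harsher random actions.
alt = np.array([features(rng.normal(13.9, 0.5), rng.normal(0, 0.1),
                         rng.normal(0, 2.0), rng.normal(0, 0.2)) for _ in range(500)])

# Logistic preference fit: maximize the probability that human samples win.
diff = human - alt
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-diff @ w))   # P(human action preferred)
    w += 1e-2 * diff.T @ (1.0 - p) / len(diff)

print("learned reward weights:", np.round(w, 3))
```

The fitted weights assign clear penalties to harsh acceleration and steering, which is exactly the kind of signal a realistic behavior model can then be trained to maximize in closed loop.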
A key technical component is the graph-based representation of the driving environment, which enables flexible application of the models to diverse road layouts and traffic situations—regardless of the number and type of interacting participants. Additionally, the algorithms are optimized for low runtime to ensure real-time deployment in the vehicle.
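The idea of a graph-based representation can be sketched as follows: each traffic participant becomes a node with a small feature vector, edges link participants that are close enough to interact, and a message-passing step aggregates neighbor information. The point of the structure is that the same code handles any number and mix of participants; the features, interaction radius, and weights below are illustrative assumptions, not the project's actual architecture.

```python
import numpy as np

# Minimal sketch of a graph-based scene representation (illustrative only).

rng = np.random.default_rng(1)

def build_graph(participants, interaction_radius=30.0):
    """Nodes: [x, y, vx, vy, type]; edges: pairs within the interaction radius."""
    nodes = np.array([[p["x"], p["y"], p["vx"], p["vy"], p["type"]] for p in participants])
    edges = []
    for i in range(len(participants)):
        for j in range(len(participants)):
            if i != j and np.hypot(nodes[i, 0] - nodes[j, 0],
                                   nodes[i, 1] - nodes[j, 1]) < interaction_radius:
                edges.append((i, j))
    return nodes, edges

def message_passing(nodes, edges, w_self, w_neigh):
    """One propagation step: each node mixes its own features with its neighbors'."""
    out = nodes @ w_self
    for i, j in edges:
        degree = max(1, sum(1 for a, _ in edges if a == i))
        out[i] += (nodes[j] @ w_neigh) / degree
    return np.tanh(out)

# A scene with any number of participants works without code changes.
scene = [
    {"x": 0.0,  "y": 0.0, "vx": 14.0, "vy": 0.0, "type": 0},  # ego vehicle
    {"x": 25.0, "y": 3.5, "vx": 12.0, "vy": 0.0, "type": 0},  # car in the next lane
    {"x": 40.0, "y": 1.0, "vx": 5.0,  "vy": 0.0, "type": 1},  # cyclist ahead
]
nodes, edges = build_graph(scene)
w_self, w_neigh = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
embeddings = message_passing(nodes, edges, w_self, w_neigh)
print("node embeddings shape:", embeddings.shape)  # (number of participants, 8)
```

Because nodes and edges are built from whatever the scene contains, adding a fourth or fifth participant changes only the input list, not the model code, which is what makes the representation transfer across road layouts and traffic situations.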
Benefits for CARIAD and the Industry
The developed models have several advantages:
- Scene-consistent predictions: By executing the learned behavior models in a microscopic closed-loop traffic simulation, the developers obtain scene-consistent predictions of how different traffic participants will behave and interact with each other. This allows the automated driving feature to evaluate different planning strategies by predicting the reactions of surrounding traffic participants. As a result, maneuvers can be planned more efficiently, ensuring smoother interactions with other road users.
- Efficient benchmarking: The models serve as an interactive test environment for planning algorithms, reducing the need for costly and time-consuming test drives (see the sketch after this list).
- Adaptability and cost efficiency: Because they are trained in a simulated environment, the models can be quickly adapted to new traffic situations without extensive real-world data collection, significantly cutting development time and costs.
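As a rough illustration of the benchmarking idea, the sketch below wraps a stand-in behavior model in a small interactive environment with a reset/step interface and scores two toy planning strategies on their progress and on how close they force the surrounding traffic. The class names, dynamics, and metrics are hypothetical assumptions; the project's actual simulation is far more detailed.

```python
import numpy as np

# Illustrative sketch: learned behavior models as an interactive test
# environment for a planner (names and dynamics are assumptions).

class TrafficEnv:
    """Tiny closed-loop environment: one ego vehicle, one reacting follower."""

    def reset(self):
        self.ego = {"s": 20.0, "v": 13.0}
        self.other = {"s": 0.0, "v": 15.0}
        return self._obs()

    def step(self, ego_accel, dt=0.1):
        # The ego vehicle applies the planner's action.
        self.ego["v"] = max(0.0, self.ego["v"] + ego_accel * dt)
        self.ego["s"] += self.ego["v"] * dt
        # Stand-in behavior model: the follower brakes when the gap gets small.
        gap = self.ego["s"] - self.other["s"]
        other_accel = -2.0 if gap < 15.0 else 0.5 if self.other["v"] < 15.0 else 0.0
        self.other["v"] = max(0.0, self.other["v"] + other_accel * dt)
        self.other["s"] += self.other["v"] * dt
        return self._obs()

    def _obs(self):
        return {"gap": self.ego["s"] - self.other["s"], "ego_v": self.ego["v"]}

def benchmark(planner, steps=100):
    """Score a planner on progress and on the smallest gap it leaves behind."""
    env = TrafficEnv()
    obs = env.reset()
    min_gap = np.inf
    for _ in range(steps):
        obs = env.step(planner(obs))
        min_gap = min(min_gap, obs["gap"])
    return {"final_ego_speed": round(obs["ego_v"], 2), "min_gap": round(min_gap, 2)}

# Two toy planning strategies compared in the same interactive environment.
cautious = lambda obs: 0.3 if obs["ego_v"] < 13.9 else 0.0       # hold a target speed
aggressive = lambda obs: -1.5 if obs["gap"] > 12.0 else 1.0      # brakes until the follower closes in
print("cautious  :", benchmark(cautious))
print("aggressive:", benchmark(aggressive))
```

Because the follower reacts to whatever the planner does, the two strategies produce different gaps and speeds, so candidate planners can be compared in simulation before any test drive takes place.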
Technology with Real-World Impact
This research combines cutting-edge AI technologies with practical application. Driving in the real world means operating in a complex and highly interactive environment, where every decision affects future situations. Among other challenges, this demands perception, scene understanding, reasoning, and foresighted planning, as well as the ability to continuously adapt to changing environments.
Reinforcement learning makes it possible to integrate all of these aspects into one system. The results feed directly into prototype development and have the potential to accelerate the introduction and adoption of automated driving.