traffic, and testing in safety-critical scenarios is essential for identifying scenarios that autonomous vehicles cannot handle. Given the rarity of safety-critical scenarios, it is necessary to investigate how to generate them systematically. In this paper, we propose an adversarial method to efficiently generate safety-critical scenarios through deep reinforcement learning. We first formulate typical lane-change scenarios as a Markov Decision Process and then train the background vehicles to aggressively interfere with the autonomous vehicle under test, creating risky situations. We also propose a reasonableness reward to avoid extreme adversarial behavior of the background vehicles, keeping the generated scenarios reasonable and informative for autonomous vehicle testing. Simulation results show that the generated scenarios are more safety-critical than those in the naturalistic environment, significantly degrading the performance of the vehicle under test and providing a basis for improving the model.
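As a rough illustration of the reward design described above, the sketch below combines an adversarial risk term with a reasonableness penalty for a background vehicle. The function name `adversarial_reward`, the state fields, and all weights and thresholds are illustrative assumptions for this sketch, not the exact formulation used in the paper.

```python
import math

def adversarial_reward(ego_state, bv_state, collision, w_risk=1.0, w_reason=0.5):
    """Hypothetical step reward for an adversarial background vehicle (BV).

    ego_state / bv_state: dicts with 'x', 'y' (position, m) and 'v' (speed, m/s).
    collision: True if the BV forced a collision with the vehicle under test.
    All constants below are placeholders, not values from the paper.
    """
    # Adversarial term: encourage closing the gap to the vehicle under test.
    gap = math.hypot(ego_state["x"] - bv_state["x"], ego_state["y"] - bv_state["y"])
    risk = 1.0 / max(gap, 1.0)          # grows as the BV approaches the ego vehicle
    if collision:
        risk += 10.0                    # large bonus for producing a critical event

    # Reasonableness term: discourage physically implausible, overly aggressive
    # driving, e.g. speeds far outside a nominal range (placeholder bounds).
    v = bv_state["v"]
    unreasonable = max(0.0, v - 35.0) + max(0.0, 5.0 - v)

    return w_risk * risk - w_reason * unreasonable
```

In a setup like this, the risk term drives the BV toward interactions that stress the vehicle under test, while the reasonableness penalty keeps the adversary's behavior within plausible driving limits, matching the trade-off the paper's reasonableness reward is designed to achieve.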
The following is a brief introduction to this work; please refer to the original paper for details.
If you find our work useful in your research, please consider citing:
@INPROCEEDINGS{10422684,
author={He, Zimin and Zhang, Jiawei and Yao, Danya and Zhang, Yi and Pei, Huaxin},
booktitle={2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)},
title={Adversarial Generation of Safety-Critical Lane-Change Scenarios for Autonomous Vehicles},
year={2023},
volume={},
number={},
pages={6096-6101},
keywords={Deep learning;Performance evaluation;Simulation;Reinforcement learning;Safety;Autonomous vehicles;Testing},
doi={10.1109/ITSC57777.2023.10422684}}