VLP: Vision Language Planning for Autonomous Driving (2401.05577v4)
Abstract: Autonomous driving is a complex and challenging task that aims at safe motion planning through scene understanding and reasoning. While vision-only autonomous driving methods have recently achieved notable performance through enhanced scene understanding, several key issues, including limited reasoning ability, low generalization performance, and poor handling of long-tail scenarios, still need to be addressed. In this paper, we present VLP, a novel Vision-Language-Planning framework that exploits Large Language Models (LLMs) to bridge the gap between linguistic understanding and autonomous driving. VLP enhances autonomous driving systems by strengthening both the source memory foundation and the self-driving car's contextual understanding. VLP achieves state-of-the-art end-to-end planning performance on the challenging nuScenes dataset, reducing the average L2 error and collision rate by 35.9% and 60.5%, respectively, compared to the previous best method. Moreover, VLP shows improved performance in challenging long-tail scenarios and strong generalization capability when faced with new urban environments.
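
To make the language-guidance idea concrete, below is a minimal, hypothetical sketch of one common way an LLM/text encoder can steer a driving planner: aligning the planner's pooled ego/BEV query features with frozen text embeddings of a scene description via a contrastive loss. This is an illustration of the general technique, not the paper's implementation; all module names, dimensions, and the loss choice are assumptions.

```python
# Hypothetical sketch of language-feature alignment for a driving planner.
# Not the authors' code; dimensions and names are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageGuidedAlignment(nn.Module):
    """Projects planner features into the text-embedding space and applies
    an InfoNCE-style contrastive loss against frozen language features."""
    def __init__(self, feat_dim: int = 256, text_dim: int = 512, temperature: float = 0.07):
        super().__init__()
        self.proj = nn.Linear(feat_dim, text_dim)  # planner feature -> text space
        self.temperature = temperature

    def forward(self, planner_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        # planner_feat: (B, feat_dim), e.g., a pooled ego/BEV query per sample
        # text_feat:    (B, text_dim), frozen embedding of the scene caption/prompt
        z = F.normalize(self.proj(planner_feat), dim=-1)
        t = F.normalize(text_feat, dim=-1)
        logits = z @ t.t() / self.temperature      # (B, B) similarity matrix
        labels = torch.arange(z.size(0), device=z.device)
        # Symmetric contrastive loss: each sample should match its own caption.
        return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

if __name__ == "__main__":
    align = LanguageGuidedAlignment()
    loss = align(torch.randn(4, 256), torch.randn(4, 512))
    print(float(loss))
```

In this kind of setup the text encoder stays frozen, so the auxiliary loss only adds a lightweight projection head at training time and can be dropped entirely at inference.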
- Chenbin Pan (6 papers)
- Burhaneddin Yaman (30 papers)
- Tommaso Nesti (8 papers)
- Abhirup Mallik (9 papers)
- Alessandro G Allievi (1 paper)
- Senem Velipasalar (61 papers)
- Liu Ren (57 papers)