Embodied Understanding of Driving Scenarios (2403.04593v1)
Abstract: Embodied scene understanding serves as the cornerstone for autonomous agents to perceive, interpret, and respond to open driving scenarios. Such understanding is typically founded upon Vision-Language Models (VLMs). Nevertheless, existing VLMs are restricted to the 2D domain and lack spatial awareness and long-horizon extrapolation capabilities. We revisit the key aspects of autonomous driving and formulate appropriate rubrics. To this end, we introduce the Embodied Language Model (ELM), a comprehensive framework tailored for agents' understanding of driving scenes with large spatial and temporal spans. ELM incorporates space-aware pre-training to endow the agent with robust spatial localization capabilities. In addition, the model employs time-aware token selection to accurately query temporal cues. We instantiate ELM on the reformulated multi-faceted benchmark, and it surpasses previous state-of-the-art approaches in all aspects. All code, data, and models will be publicly shared.
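The abstract names "time-aware token selection" but does not describe its mechanism. As an illustration only, and not ELM's actual design, the sketch below shows one generic way such a module could work: score per-frame visual tokens against a time-conditioned query embedding and keep the top-k. All names, shapes, and the scoring rule are assumptions introduced here for clarity.

```python
# Illustrative sketch only: a generic "time-aware token selection" module.
# The paper's abstract does not specify ELM's mechanism; the class name,
# tensor shapes, and top-k relevance scoring are assumptions.
import torch
import torch.nn as nn


class TimeAwareTokenSelector(nn.Module):
    """Scores per-frame visual tokens against a time-conditioned query and keeps the top-k."""

    def __init__(self, dim: int, k: int = 64):
        super().__init__()
        self.k = k
        self.query_proj = nn.Linear(dim, dim)
        self.token_proj = nn.Linear(dim, dim)

    def forward(self, frame_tokens: torch.Tensor, time_query: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (B, T*N, D) visual tokens gathered over T history frames
        # time_query:   (B, D) embedding of the temporal cue in the question
        q = self.query_proj(time_query).unsqueeze(1)        # (B, 1, D)
        kt = self.token_proj(frame_tokens)                  # (B, T*N, D)
        scores = (kt * q).sum(-1) / kt.shape[-1] ** 0.5     # (B, T*N) scaled dot-product relevance
        top_idx = scores.topk(self.k, dim=-1).indices       # (B, k) indices of most relevant tokens
        idx = top_idx.unsqueeze(-1).expand(-1, -1, frame_tokens.shape[-1])
        return frame_tokens.gather(1, idx)                  # (B, k, D) selected tokens


# Usage: keep 64 tokens out of 8 frames x 256 tokens each.
selector = TimeAwareTokenSelector(dim=768, k=64)
tokens = torch.randn(2, 8 * 256, 768)
query = torch.randn(2, 768)
selected = selector(tokens, query)  # -> (2, 64, 768)
```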
- Yunsong Zhou
- Linyan Huang
- Qingwen Bu
- Jia Zeng
- Tianyu Li
- Hang Qiu
- Hongzi Zhu
- Minyi Guo
- Yu Qiao
- Hongyang Li