
Embodied Understanding of Driving Scenarios (2403.04593v1)

Published 7 Mar 2024 in cs.CV

Abstract: Embodied scene understanding serves as the cornerstone for autonomous agents to perceive, interpret, and respond to open driving scenarios. Such understanding is typically founded upon Vision-Language Models (VLMs). Nevertheless, existing VLMs are restricted to the 2D domain, devoid of spatial awareness and long-horizon extrapolation proficiencies. We revisit the key aspects of autonomous driving and formulate appropriate rubrics. Hereby, we introduce the Embodied Language Model (ELM), a comprehensive framework tailored for agents' understanding of driving scenes with large spatial and temporal spans. ELM incorporates space-aware pre-training to endow the agent with robust spatial localization capabilities. Besides, the model employs time-aware token selection to accurately inquire about temporal cues. We instantiate ELM on the reformulated multi-faced benchmark, and it surpasses previous state-of-the-art approaches in all aspects. All code, data, and models will be publicly shared.
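The abstract's "time-aware token selection" can be pictured as retrieving, from a long history of frame features, the few frames most relevant to a temporal query before passing them to the language model. The sketch below is an illustrative approximation only, not the paper's actual mechanism: the function name, cosine-similarity scoring, and top-k selection are all assumptions made for this example.

```python
import numpy as np

def time_aware_token_select(frame_tokens, time_query, k=4):
    """Pick the k frames whose pooled features best match a temporal query.

    frame_tokens: (T, D) array, one pooled visual feature per past frame.
    time_query:   (D,) embedding of the temporal question (hypothetical).
    Returns (indices, selected_tokens) with indices in chronological order.
    """
    # Cosine similarity between each frame feature and the query vector.
    f = frame_tokens / (np.linalg.norm(frame_tokens, axis=1, keepdims=True) + 1e-8)
    q = time_query / (np.linalg.norm(time_query) + 1e-8)
    sim = f @ q                      # (T,) relevance score per frame
    idx = np.sort(np.argsort(-sim)[:k])  # top-k frames, kept in time order
    return idx, frame_tokens[idx]
```

In this toy form, the selector simply compresses a long temporal span into a fixed token budget, which is the general motivation the abstract gives for querying temporal cues efficiently.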

Authors (10)
  1. Yunsong Zhou (10 papers)
  2. Linyan Huang (6 papers)
  3. Qingwen Bu (15 papers)
  4. Jia Zeng (45 papers)
  5. Tianyu Li (101 papers)
  6. Hang Qiu (17 papers)
  7. Hongzi Zhu (14 papers)
  8. Minyi Guo (98 papers)
  9. Yu Qiao (563 papers)
  10. Hongyang Li (99 papers)
Citations (19)
