DriveWorld: 4D Pre-trained Scene Understanding via World Models for Autonomous Driving (2405.04390v1)
Abstract: Vision-centric autonomous driving has recently attracted wide attention due to its lower cost. Pre-training is essential for extracting a universal representation. However, current vision-centric pre-training typically relies on either 2D or 3D pretext tasks, overlooking the temporal characteristics of autonomous driving as a 4D scene understanding task. In this paper, we address this challenge by introducing a world model-based autonomous driving 4D representation learning framework, dubbed \emph{DriveWorld}, which is capable of pre-training from multi-camera driving videos in a spatio-temporal fashion. Specifically, we propose a Memory State-Space Model for spatio-temporal modelling, which consists of a Dynamic Memory Bank module for learning temporal-aware latent dynamics to predict future changes and a Static Scene Propagation module for learning spatial-aware latent statics to offer comprehensive scene contexts. We additionally introduce a Task Prompt to decouple task-aware features for various downstream tasks. The experiments demonstrate that DriveWorld delivers promising results on various autonomous driving tasks. When pre-trained with the OpenScene dataset, DriveWorld achieves a 7.5% increase in mAP for 3D object detection, a 3.0% increase in IoU for online mapping, a 5.0% increase in AMOTA for multi-object tracking, a 0.1m decrease in minADE for motion forecasting, a 3.0% increase in IoU for occupancy prediction, and a 0.34m reduction in average L2 error for planning.
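The abstract describes three components: a Dynamic Memory Bank for temporal-aware latent dynamics, a Static Scene Propagation module for spatial-aware latent statics, and a Task Prompt that decouples task-aware features. The following is a minimal, illustrative sketch of how such a decomposition might be wired together; it is not the authors' implementation, and all module names, tensor shapes, and dimensions are assumptions made for illustration.

```python
# Hypothetical sketch of the three components named in the abstract,
# assuming PyTorch and made-up BEV feature shapes (T frames, B batch, C channels).
import torch
import torch.nn as nn


class DynamicMemoryBank(nn.Module):
    """Recurrently carries a temporal-aware latent across frames (assumed design)."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.gru = nn.GRUCell(feat_dim, feat_dim)

    def forward(self, bev_feats: torch.Tensor) -> torch.Tensor:
        # bev_feats: (T, B, C) sequence of pooled multi-camera BEV features
        T, B, C = bev_feats.shape
        h = bev_feats.new_zeros(B, C)
        states = []
        for t in range(T):
            h = self.gru(bev_feats[t], h)  # update temporal latent dynamics
            states.append(h)
        return torch.stack(states)  # (T, B, C)


class StaticScenePropagation(nn.Module):
    """Summarises the sequence into a spatial-aware static latent (assumed design)."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, bev_feats: torch.Tensor) -> torch.Tensor:
        # Average over time, then broadcast the static scene context to all frames.
        static = self.proj(bev_feats.mean(dim=0, keepdim=True))  # (1, B, C)
        return static.expand_as(bev_feats)  # (T, B, C)


class TaskPromptFusion(nn.Module):
    """Injects a learnable per-task prompt to decouple task-aware features (assumed design)."""

    def __init__(self, num_tasks: int = 6, feat_dim: int = 256):
        super().__init__()
        self.prompts = nn.Embedding(num_tasks, feat_dim)
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, feats: torch.Tensor, task_id: int) -> torch.Tensor:
        prompt = self.prompts(torch.tensor(task_id, device=feats.device))
        prompt = prompt.expand_as(feats)
        return self.fuse(torch.cat([feats, prompt], dim=-1))


if __name__ == "__main__":
    T, B, C = 4, 2, 256
    bev = torch.randn(T, B, C)                  # placeholder BEV features
    dynamic = DynamicMemoryBank(C)(bev)         # temporal-aware latents
    static = StaticScenePropagation(C)(bev)     # spatial-aware static context
    fused = TaskPromptFusion(feat_dim=C)(dynamic + static, task_id=0)
    print(fused.shape)                          # torch.Size([4, 2, 256])
```

The split mirrors the abstract's framing: dynamics and statics are modelled separately and then combined, with a task-specific prompt selecting task-aware features before the downstream heads.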
- Chen Min (17 papers)
- Dawei Zhao (22 papers)
- Liang Xiao (80 papers)
- Jian Zhao (218 papers)
- Xinli Xu (17 papers)
- Zheng Zhu (200 papers)
- Lei Jin (73 papers)
- Jianshu Li (34 papers)
- Yulan Guo (89 papers)
- Junliang Xing (80 papers)
- Liping Jing (33 papers)
- Yiming Nie (9 papers)
- Bin Dai (60 papers)