SparseDrive: End-to-End Autonomous Driving via Sparse Scene Representation (2405.19620v2)

Published 30 May 2024 in cs.CV

Abstract: The well-established modular autonomous driving system is decoupled into different standalone tasks, e.g. perception, prediction and planning, suffering from information loss and error accumulation across modules. In contrast, end-to-end paradigms unify multi-tasks into a fully differentiable framework, allowing for optimization in a planning-oriented spirit. Despite the great potential of end-to-end paradigms, both the performance and efficiency of existing methods are not satisfactory, particularly in terms of planning safety. We attribute this to the computationally expensive BEV (bird's eye view) features and the straightforward design for prediction and planning. To this end, we explore the sparse representation and review the task design for end-to-end autonomous driving, proposing a new paradigm named SparseDrive. Concretely, SparseDrive consists of a symmetric sparse perception module and a parallel motion planner. The sparse perception module unifies detection, tracking and online mapping with a symmetric model architecture, learning a fully sparse representation of the driving scene. For motion prediction and planning, we review the great similarity between these two tasks, leading to a parallel design for motion planner. Based on this parallel design, which models planning as a multi-modal problem, we propose a hierarchical planning selection strategy, which incorporates a collision-aware rescore module, to select a rational and safe trajectory as the final planning output. With such effective designs, SparseDrive surpasses previous state-of-the-arts by a large margin in performance of all tasks, while achieving much higher training and inference efficiency. Code will be available at https://github.com/swc-17/SparseDrive for facilitating future research.

Authors (6)
  1. Wenchao Sun (8 papers)
  2. Xuewu Lin (10 papers)
  3. Yining Shi (21 papers)
  4. Chuang Zhang (78 papers)
  5. Haoran Wu (18 papers)
  6. Sifa Zheng (17 papers)
Citations (7)

Summary

SparseDrive: Enhancing Autonomous Driving with Sparse Scene Representation

The paper "SparseDrive: End-to-End Autonomous Driving via Sparse Scene Representation" introduces a novel approach to autonomous driving that addresses critical limitations in existing end-to-end paradigms. The traditional modular systems for autonomous vehicles, consisting of discrete perception, prediction, and planning tasks, inherently suffer from information loss and error accumulation. The authors propose SparseDrive, an innovative end-to-end framework that unifies these tasks into a cohesive, fully differentiable system leveraging sparse scene representation.

Key Contributions and Methodology

The authors emphasize the shortcomings of prevailing BEV-centric methods, which depend on computationally expensive bird's-eye-view features and on overly simple designs for prediction and planning. Their solution, SparseDrive, consists of two main components: a symmetric sparse perception module and a parallel motion planner.

  • Symmetric Sparse Perception Module: This module integrates detection, tracking, and online mapping tasks using a symmetric model architecture. By learning a sparse representation of the scene, it efficiently captures both dynamic and static elements of the environment.
  • Parallel Motion Planner: Recognizing the structural similarity between motion prediction and planning, the authors run both tasks concurrently in a parallel design. Planning is modeled as a multi-modal problem, and a hierarchical planning selection strategy with a collision-aware rescore module picks the final trajectory, ensuring the chosen output is both rational and safe (a minimal sketch of this selection step follows the list).
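This summary does not spell out the selection logic in detail; the following is a minimal, hypothetical Python sketch of what a hierarchical selection with a collision-aware rescore could look like: score multi-modal ego-trajectory candidates, disqualify those whose rollout passes too close to predicted agent positions, then pick the best survivor. All function names, the distance threshold, and the fallback behavior are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def collision_aware_rescore(trajs, scores, agent_forecasts, safety_radius=1.5):
    """Hypothetical rescore: disqualify candidate ego trajectories that pass
    within `safety_radius` meters of any predicted agent position.

    trajs:           (M, T, 2) candidate ego trajectories over T timesteps
    scores:          (M,)      initial confidence of each candidate
    agent_forecasts: (N, T, 2) predicted positions of N surrounding agents
    """
    rescored = scores.copy()
    for m in range(trajs.shape[0]):
        # Distance from candidate m to every agent at every timestep: (N, T)
        dists = np.linalg.norm(trajs[m][None] - agent_forecasts, axis=-1)
        if (dists < safety_radius).any():
            rescored[m] = -np.inf  # treat a predicted collision as disqualifying
    return rescored

def select_trajectory(trajs, scores, agent_forecasts):
    """Hierarchical selection sketch: rescore candidates for safety, then
    take the highest-scoring survivor; fall back to the raw argmax if every
    candidate was flagged, so a trajectory is always returned."""
    rescored = collision_aware_rescore(trajs, scores, agent_forecasts)
    best = int(np.argmax(rescored))
    if np.isneginf(rescored[best]):
        best = int(np.argmax(scores))
    return trajs[best]
```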

Experimental Results

The experimental evaluation on the nuScenes dataset demonstrates that SparseDrive substantially surpasses existing state-of-the-art methods across all tasks, including detection, tracking, mapping, motion prediction, and planning.

  • Performance Metrics: SparseDrive-B, the base model, reduces the average L2 planning error by 19.4% and the collision rate by 71.4% relative to the previous best method, UniAD. SparseDrive-S, a smaller variant, likewise outperforms prior work on all tasks while maintaining superior training and inference efficiency (the metric sketch after this list illustrates the underlying computation).
  • Efficiency: SparseDrive trains up to 7.2x faster and runs inference about 5x faster than previous methods, a significant advance in computational efficiency.
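For context, open-loop planning on nuScenes is commonly scored by the L2 distance between planned and ground-truth ego waypoints, averaged over 1s/2s/3s horizons, together with a collision rate against other agents. The sketch below shows one common convention for the L2 metric and the arithmetic behind relative-reduction claims; the 2 Hz sampling rate and all variable names are assumptions for illustration, not the paper's evaluation code.

```python
import numpy as np

def avg_l2_error(plan, gt, hz=2):
    """Mean L2 error (meters) between planned and ground-truth ego waypoints,
    averaged over the 1s, 2s, and 3s horizons. One common convention averages
    per-step errors up to each horizon; `hz` is waypoints per second.

    plan, gt: (T, 2) arrays of ego (x, y) positions, with T >= 3 * hz.
    """
    horizons = [1 * hz, 2 * hz, 3 * hz]
    errs = [np.linalg.norm(plan[:h] - gt[:h], axis=-1).mean() for h in horizons]
    return float(np.mean(errs))

def relative_reduction(baseline, new):
    """Arithmetic behind claims like '19.4% lower L2 error': e.g. a drop from
    a hypothetical 1.00 m to 0.806 m gives (1.00 - 0.806) / 1.00 = 19.4%."""
    return (baseline - new) / baseline
```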

Implications and Future Directions

SparseDrive presents a comprehensive and efficient solution to the challenges faced by end-to-end autonomous driving systems. The sparse scene representation not only enhances performance but also significantly reduces computational demands. The paper's contributions are crucial for advancing autonomous vehicle technologies towards safer and more reliable real-world deployment.

Looking ahead, the paper's acknowledged limitations include a remaining performance gap with specialized single-task methods, most notably in online mapping. The dataset scale and evaluation metrics could also be expanded to stress-test SparseDrive under more varied and complex real-world scenarios. This research lays a promising foundation for future work on integrating perception, prediction, and planning in autonomous driving.
