
$AutoDrive\text{-}P^3$: Unified Chain of Perception-Prediction-Planning Thought via Reinforcement Fine-Tuning

Published 30 Mar 2026 in cs.RO and cs.CV | (2603.28116v1)

Abstract: Vision-Language Models (VLMs) are increasingly being adopted for end-to-end autonomous driving systems due to their exceptional performance in handling long-tail scenarios. However, current VLM-based approaches suffer from two major limitations: 1) some VLMs directly output planning results without chain-of-thought (CoT) reasoning, bypassing the crucial perception and prediction stages, which creates a significant domain gap and compromises decision-making capability; 2) other VLMs can generate outputs for perception, prediction, and planning tasks but employ a fragmented decision-making approach in which these modules operate separately, leading to a significant lack of synergy that undermines true planning performance. To address these limitations, we propose ${AutoDrive\text{-}P3}$, a novel framework that seamlessly integrates $\textbf{P}$erception, $\textbf{P}$rediction, and $\textbf{P}$lanning through structured reasoning. We introduce the ${P3\text{-}CoT}$ dataset to facilitate coherent reasoning and propose ${P3\text{-}GRPO}$, a hierarchical reinforcement learning algorithm that provides progressive supervision across all three tasks. Specifically, ${AutoDrive\text{-}P3}$ progressively generates CoT reasoning and answers for perception, prediction, and planning, where perception provides essential information for subsequent prediction and planning, while both perception and prediction collectively contribute to the final planning decisions, enabling safer and more interpretable autonomous driving. Additionally, to balance inference efficiency with performance, we introduce dual thinking modes: detailed thinking and fast thinking. Extensive experiments on both open-loop (nuScenes) and closed-loop (NAVSIMv1/v2) benchmarks demonstrate that our approach achieves state-of-the-art performance in planning tasks. Code is available at https://github.com/haha-yuki-haha/AutoDrive-P3.

Summary

  • The paper introduces AutoDrive-P3, a framework that interleaves chain-of-thought reasoning across perception, prediction, and planning and trains it with hierarchical reinforcement learning.
  • The authors build a custom P3-CoT dataset with frame-wise annotations to enable direct reward assignment and joint optimization of all three modules.
  • Experiments on the nuScenes and NAVSIM benchmarks show markedly reduced collision rates and state-of-the-art planning performance in both detailed and fast thinking modes.

Unified Chain-of-Perception, Prediction, and Planning with Reinforcement Fine-Tuning: An Expert Analysis of "AutoDrive-P3" (2603.28116)

Motivation and Problem Formulation

The paper introduces AutoDrive-P3, a Vision-Language Model (VLM) framework targeting unified, interpretable, and synergistic end-to-end reasoning in autonomous driving. The authors identify two core limitations of existing VLM-based end-to-end approaches: (1) direct planning output without explicit chain-of-thought (CoT) reasoning, causing domain gaps and compromised robustness, and (2) fragmented modular reasoning in perception, prediction, and planning, which undermines inter-module synergy and degrades planning reliability. The work posits that autonomous driving's safety, interpretability, and long-tail robustness rely critically on explicit, staged joint optimization of perception, prediction, and planning, thus requiring structured, progressive CoT supervision across all stages.
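To make the staged-supervision claim concrete, the intended structure can be written as a factorized generative process in which each stage conditions on the stages before it (the notation below is ours, not taken from the paper):

$$p(\text{plan}, \text{pred}, \text{perc} \mid o) \;=\; p(\text{perc} \mid o)\; p(\text{pred} \mid \text{perc}, o)\; p(\text{plan} \mid \text{perc}, \text{pred}, o),$$

where $o$ denotes the multimodal observation (video, ego-state, driving command). Fragmented pipelines drop these cross-stage conditionals; AutoDrive-P3 keeps them explicit inside a single autoregressive CoT sequence.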

AutoDrive-P3 Architecture and Algorithmic Innovations

AutoDrive-P3 instantiates a three-module framework wherein perception, prediction, and planning are explicitly interleaved through CoT reasoning and supervised by hierarchical reinforcement learning. The pipeline proceeds as follows:

  • Structured Reasoning: The model operates on multimodal input (front-view video, ego-states, commands), generating sequential CoT in which perception informs prediction, and both feed into planning.
  • P3-CoT Dataset: A novel dataset synthesized to provide frame-wise, key-object-centered CoT annotations, with tightly coupled perception, prediction, and planning sequences. This dataset enables unified multi-stage reasoning and direct reward assignment for reinforcement learning.
  • P3-GRPO Hierarchical Reinforcement Learning: The authors extend Group Relative Policy Optimization (GRPO), originally applied only to planning, to hierarchical, reward-based fine-tuning encompassing all three cognitive modules. The total reward integrates perception (IoU/precision/recall for object detection), prediction (behavior accuracy weighted by IoU), and planning (L2 displacement and, in NAVSIM environments, composite PDMS metrics); a sketch of how such a composite reward could be assembled follows this list.
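Since the summary does not give the exact reward functions or weighting, the following is a minimal Python sketch of one way to assemble such a hierarchical composite reward; all function names, thresholds, and weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def perception_reward(pred_boxes, gt_boxes, iou_fn):
    """Illustrative F1-style detection reward at an IoU threshold of 0.5.
    The paper's exact perception reward is not specified here; this is an
    assumed stand-in combining precision and recall."""
    if not pred_boxes or not gt_boxes:
        return 0.0
    hits = sum(max(iou_fn(p, g) for g in gt_boxes) >= 0.5 for p in pred_boxes)
    precision, recall = hits / len(pred_boxes), hits / len(gt_boxes)
    return 0.0 if hits == 0 else 2 * precision * recall / (precision + recall)

def prediction_reward(pred_behaviors, gt_behaviors, match_ious):
    """Illustrative: behavior-classification accuracy weighted by each
    matched object's IoU, as described in the analysis above."""
    weighted = [iou for p, g, iou in zip(pred_behaviors, gt_behaviors, match_ious) if p == g]
    return float(np.sum(weighted)) / max(len(gt_behaviors), 1)

def planning_reward(pred_traj, gt_traj):
    """Illustrative: squash the average L2 displacement (meters) into (0, 1]."""
    l2 = np.linalg.norm(np.asarray(pred_traj) - np.asarray(gt_traj), axis=-1).mean()
    return float(np.exp(-l2))

def total_reward(r_perc, r_pred, r_plan, weights=(0.25, 0.25, 0.5)):
    # Hypothetical weights; the paper supervises all three stages, but the
    # exact combination rule is an assumption here.
    return weights[0] * r_perc + weights[1] * r_pred + weights[2] * r_plan
```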

This architecture is further enhanced by "dual thinking modes," enabling a tradeoff between detailed step-by-step interpretability (detailed mode) and fast, truncated greedy inference (fast mode) for deployment-time efficiency.
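The summary does not detail the switching mechanism; a plausible minimal implementation is a prompt-level switch that either elicits the full three-stage CoT or skips straight to a compact answer with a smaller decoding budget. The prompt strings and token budgets below are assumptions for illustration only.

```python
def build_prompt(observation_desc: str, mode: str = "detailed") -> str:
    """Hypothetical prompt construction for the two thinking modes."""
    base = f"Observation: {observation_desc}\n"
    if mode == "detailed":
        # Full chain: perception -> prediction -> planning, each with reasoning.
        return base + ("First describe the key objects (perception), then their "
                       "likely motion (prediction), then output the ego trajectory "
                       "(planning).")
    # Fast mode: truncated reasoning, answer-oriented decoding.
    return base + "Output only the final ego trajectory."

MAX_NEW_TOKENS = {"detailed": 1024, "fast": 128}  # assumed decoding budgets
```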

Experimental Validation and Strong Results

AutoDrive-P3 is evaluated on the nuScenes (open-loop) and NAVSIMv1/v2 (closed-loop) planning benchmarks. Empirically, it achieves state-of-the-art (SOTA) performance, most notably in collision rate:

  • nuScenes: At the 3 s horizon, collision rates drop to 0.06% (detailed) and 0.08% (fast), outperforming prior SOTA methods, with L2 displacement error on par with or better than prior work, even with smaller model sizes and less training data (the L2 metric itself is sketched below this list).
  • NAVSIMv1/v2: AutoDrive-P3 achieves 90.6 (detailed) / 90.2 (fast) PDMS on v1 and 86.2 / 85.2 EPDMS on v2, both vision-only, surpassing methods that use additional modalities such as LiDAR or substantially larger models.
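For readers unfamiliar with the open-loop metric, the L2 displacement error at a horizon is simply the mean Euclidean distance between predicted and ground-truth waypoints up to that horizon. A minimal sketch, with array shapes and the sampling rate assumed:

```python
import numpy as np

def l2_displacement_error(pred_traj: np.ndarray, gt_traj: np.ndarray,
                          horizon_steps: int) -> float:
    """Mean L2 distance (meters) between predicted and ground-truth (x, y)
    waypoints over the first `horizon_steps` steps. Shapes: (T, 2)."""
    d = np.linalg.norm(pred_traj[:horizon_steps] - gt_traj[:horizon_steps], axis=-1)
    return float(d.mean())

# e.g., a 3 s horizon at 2 Hz waypoint sampling gives horizon_steps = 6 (assumed rate)
```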

Ablation studies demonstrate the necessity of staged, joint reinforcement fine-tuning (P3-GRPO): SFT alone or planning-only GRPO yields lower perception and prediction rewards and higher planning failure rates. Further analysis shows that a larger group size (i.e., more diverse reasoning paths per prompt), the use of historical trajectory inputs, and video input (over single images) all contribute positively to performance.
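The group-size finding is consistent with how GRPO computes advantages: it samples a group of G rollouts per prompt and z-scores each rollout's reward within the group, so larger groups yield better mean/variance estimates and lower-variance advantages. A minimal sketch of this standard GRPO-style step:

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """GRPO-style advantages: normalize each rollout's reward by its group's
    statistics. `rewards` has shape (G,) for one prompt's G sampled paths."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Larger G -> more reliable group statistics -> lower-variance advantage
# estimates, matching the ablation that increasing group size helps.
```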

Rewards, Dataset, and Interpretability

Explicit reward assignment and joint optimization are a central strength. The authors’ reward decomposition ensures that perceptual improvements are not incidental but fundamental drivers of downstream planning. Their custom P3-CoT dataset, which strictly enforces dependencies between modules in each reasoning sequence, enables GRPO to produce holistic driving policies. Extensive qualitative visualizations and case studies confirm increased interpretability and safety: the model’s step-wise logic is exposed, and its actions can be audited for CoT hallucinations, failure points, or unsafe plans.
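One practical payoff of exposing the step-wise logic is that simple automated audits become possible. The sketch below assumes a hypothetical tagged transcript format (<perception>...</perception>, etc.), which is not the paper's documented output schema, and flags one example inconsistency: objects reasoned about in planning that were never perceived.

```python
import re

STAGES = ("perception", "prediction", "planning")

def audit_cot(text: str) -> list[str]:
    """Flag simple consistency problems in a structured CoT transcript.
    The stage tags and object naming convention are assumed for illustration."""
    issues, sections = [], {}
    for s in STAGES:
        m = re.search(rf"<{s}>(.*?)</{s}>", text, re.S)
        if m is None:
            issues.append(f"missing {s} stage")
        sections[s] = m.group(1) if m else ""
    # Example audit: objects the planner reasons about should have been perceived.
    planned = set(re.findall(r"object_\d+", sections["planning"]))
    perceived = set(re.findall(r"object_\d+", sections["perception"]))
    for obj in planned - perceived:
        issues.append(f"{obj} used in planning but never perceived (possible hallucination)")
    return issues
```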

Practical and Theoretical Implications

On the practical side, AutoDrive-P3’s architecture offers a scalable blueprint for next-generation interpretable VLM-based planners. The dual-mode approach enables practical deployment: real-time planning is possible at 1 Hz on modern GPUs, with the detailed mode available as a secondary, interpretable channel for auditing or failover. The explicit reward structure and CoT formatting also improve both the safety and transparency of neural driving policies, anticipating regulatory as well as engineering requirements.

Theoretically, this work demonstrates the performance benefits of propagating reward to all reasoning modules, not only the final planning outputs, in autoregressive multimodal architectures. It also makes a strong case for structured, interdependent datasets (like P3-CoT) as a prerequisite for pushing large-scale VLMs toward long-tail, safety-critical applications.

Limitations and Future Directions

The authors note that while AutoDrive-P3 achieves strong generalization and interpretability, hallucinations and reasoning artifacts (typical of large VLMs) are not fully eliminated, and all reinforcement fine-tuning occurs in simulated or offline environments. Real-world deployment, causal hallucination reduction, inference acceleration, and richer closed-loop interaction remain open problems. Extending the CoT format and joint reward assignment to additional sensor modalities, traffic-rule compliance, and multi-agent scenarios would be a logical progression.

Conclusion

AutoDrive-P3 provides a clear empirical demonstration that hierarchical, CoT-supervised joint reinforcement fine-tuning of perception, prediction, and planning yields significant advances in end-to-end autonomous driving. The framework achieves marked reductions in collision rates and enhances interpretability, setting a new standard for modular synergy in large-scale, safety-critical VLM applications. Future research directions include real-world deployment, hallucination mitigation, cross-modality extensions, and adaptive reward schemes for robust, safe autonomy.
