Model-free Deep Reinforcement Learning for Urban Autonomous Driving (1904.09503v2)

Published 20 Apr 2019 in cs.LG, cs.AI, cs.CV, and cs.RO

Abstract: Urban autonomous driving decision making is challenging due to complex road geometry and multi-agent interactions. Current decision making methods are mostly manually designing the driving policy, which might result in sub-optimal solutions and is expensive to develop, generalize and maintain at scale. On the other hand, with reinforcement learning (RL), a policy can be learned and improved automatically without any manual designs. However, current RL methods generally do not work well on complex urban scenarios. In this paper, we propose a framework to enable model-free deep reinforcement learning in challenging urban autonomous driving scenarios. We design a specific input representation and use visual encoding to capture the low-dimensional latent states. Several state-of-the-art model-free deep RL algorithms are implemented into our framework, with several tricks to improve their performance. We evaluate our method in a challenging roundabout task with dense surrounding vehicles in a high-definition driving simulator. The result shows that our method can solve the task well and is significantly better than the baseline.

Authors (3)
  1. Jianyu Chen (69 papers)
  2. Bodi Yuan (6 papers)
  3. Masayoshi Tomizuka (261 papers)
Citations (247)

Summary

Model-free Deep Reinforcement Learning for Urban Autonomous Driving

The paper presents an approach to decision-making in urban autonomous driving based on model-free deep reinforcement learning (RL). The goal is a driving policy that navigates complex urban road geometries and multi-agent interactions without relying on manually designed rules, which are expensive to develop, generalize, and maintain at scale and often yield sub-optimal behavior.

Contribution and Methodology

The authors propose a framework that makes model-free deep RL practical for urban driving. It introduces a specific input representation: a bird-view image that consolidates the essential environmental information while keeping input complexity low. The image encodes map data, the planned route, a short history of detected surrounding objects, and the ego vehicle's state, providing a simplified yet comprehensive view of the driving scenario.
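
The paper does not publish code for this representation, so the following is a minimal sketch of how such a multi-channel bird-view observation might be assembled. The 64x64 resolution, the particular channel set, and the function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical bird-view observation: each semantic layer (map, routing,
# past detected objects, ego state) is rendered into its own channel of a
# fixed-size grid centred on the ego vehicle. Resolution and channel
# layout are assumptions for illustration only.
BIRDVIEW_SIZE = 64

def make_birdview(map_layer, routing_layer, object_history_layers, ego_layer):
    """Stack semantic layers into a single (C, H, W) observation."""
    channels = [map_layer, routing_layer, *object_history_layers, ego_layer]
    obs = np.stack(channels, axis=0).astype(np.float32)
    assert obs.shape[1:] == (BIRDVIEW_SIZE, BIRDVIEW_SIZE)
    return obs  # fed to the visual encoder described below
```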

A crucial component of this framework is the integration of a variational auto-encoder (VAE) to encode these bird-view observations into a low-dimensional latent state. This encoding significantly reduces the sample complexity necessary for learning an effective driving policy.
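
As a concrete illustration of this component, below is a minimal convolutional VAE encoder in PyTorch. The layer sizes, latent dimension, and 64x64 input resolution are assumptions; the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn

class BirdviewVAEEncoder(nn.Module):
    """Sketch of a convolutional VAE encoder for bird-view observations."""
    def __init__(self, in_channels=3, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            n_flat = self.conv(torch.zeros(1, in_channels, 64, 64)).shape[1]
        self.fc_mu = nn.Linear(n_flat, latent_dim)
        self.fc_logvar = nn.Linear(n_flat, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterisation trick: sample z ~ N(mu, sigma^2).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(logvar)
        return z, mu, logvar
```

The low-dimensional latent vector z, rather than the raw bird-view image, is what the RL agents described next consume as their state.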

Three contemporary model-free deep RL algorithms are implemented within the framework: Double Deep Q-Network (DDQN), Twin Delayed Deep Deterministic Policy Gradient (TD3), and Soft Actor-Critic (SAC). The authors add several modifications, such as exploration strategies tailored to urban driving scenarios, to improve the performance of these algorithms.
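
To make the use of the latent state concrete, the snippet below sketches the soft Bellman target at the core of a standard SAC update. The policy/critic interfaces and the gamma/alpha values are illustrative assumptions, not the authors' hyperparameters.

```python
import torch

def sac_critic_target(q1_target, q2_target, policy, reward, next_z, done,
                      gamma=0.99, alpha=0.2):
    """Soft Bellman target for SAC, operating on an encoded latent state z."""
    with torch.no_grad():
        next_action, next_logp = policy.sample(next_z)
        # Clipped double-Q (shared with TD3) to reduce overestimation bias.
        q_next = torch.min(q1_target(next_z, next_action),
                           q2_target(next_z, next_action))
        # Entropy-regularised backup: soft value = Q - alpha * log pi.
        return reward + gamma * (1.0 - done) * (q_next - alpha * next_logp)
```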

Experimental Evaluation

The efficacy of the proposed method is demonstrated in CARLA, an open-source, high-definition driving simulator. Two settings are evaluated at a complex roundabout: one with no surrounding vehicles and one with dense traffic. These test cases assess the framework's ability to handle varied urban driving complexity.
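
For context, a closed-loop evaluation against the CARLA Python API could look roughly like the sketch below. The host/port, blueprint choice, episode length, and the render_birdview / encoder / policy helpers are hypothetical; the paper does not describe its evaluation harness at this level of detail.

```python
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

spawn_points = world.get_map().get_spawn_points()
blueprint = world.get_blueprint_library().filter('vehicle.*')[0]
ego = world.spawn_actor(blueprint, spawn_points[0])

try:
    for _ in range(1000):
        obs = render_birdview(world, ego)            # hypothetical helper
        z = encoder.encode(obs)                      # VAE latent state
        throttle, steer = policy.act(z)              # trained SAC policy
        ego.apply_control(carla.VehicleControl(throttle=float(throttle),
                                               steer=float(steer)))
        world.tick()                                 # synchronous mode
finally:
    ego.destroy()
```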

Results indicate that SAC consistently outperforms DDQN and TD3 in both learning speed and the robustness of the learned policy. The SAC agent navigates the roundabout successfully, adapts best to dense traffic, and reaches the final goal point with a high success rate. All tested methods still show limitations, particularly in avoiding collisions, suggesting improvements such as incorporating velocity information and refining the input representation for complex driving environments.

Implications and Future Work

The research highlights the potential of model-free deep RL for urban autonomous driving. The proposed method sidesteps the limitations of manually designed driving policies, offering a framework that can adapt and optimize across varied driving scenarios.

Looking forward, further work could address the current limitations in collision avoidance, possibly by integrating additional sensory information, such as velocity, into the observation. The authors also suggest that, with more computational resources, these RL methods could reach higher levels of efficiency and adaptability. Generalizing the framework to other urban driving scenarios would further broaden its applicability across more diverse and complex environments.

This paper contributes useful evidence for model-free deep RL as a promising direction for autonomous driving, combining an examination of existing methods with a practical framework that addresses current challenges and points the way for future research in the field.