Lane Change Decision-making through Deep Reinforcement Learning with Rule-based Constraints (1904.00231v2)

Published 30 Mar 2019 in cs.RO, cs.AI, cs.LG, and stat.ML

Abstract: Autonomous driving decision-making is a great challenge due to the complexity and uncertainty of the traffic environment. Combined with rule-based constraints, a Deep Q-Network (DQN) based method is applied to the autonomous driving lane change decision-making task in this study. Through the combination of high-level lateral decision-making and low-level rule-based trajectory modification, a safe and efficient lane change behavior can be achieved. With the setting of our state representation and reward function, the trained agent is able to take appropriate actions in a real-world-like simulator. The generated policy is evaluated on the simulator 10 times, and the results demonstrate that the proposed rule-based DQN method outperforms both the rule-based approach and the plain DQN method.
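The abstract describes a two-level design: a DQN selects the high-level lateral action, and a low-level rule-based layer constrains or modifies that choice for safety. The sketch below illustrates this combination under stated assumptions; the network size, state layout, and the gap-based safety rule are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of a DQN lane-change decision with a rule-based constraint.
# Assumptions: a 10-dimensional state, a 3-action space, and a hypothetical
# minimum-gap rule (10 m) for the target lane. Not the authors' implementation.
import torch
import torch.nn as nn

ACTIONS = ["keep_lane", "change_left", "change_right"]

class QNetwork(nn.Module):
    """Maps the ego/surrounding-vehicle state to one Q-value per action."""
    def __init__(self, state_dim: int = 10, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def rule_based_mask(front_gap_left: float, rear_gap_left: float,
                    front_gap_right: float, rear_gap_right: float,
                    min_gap: float = 10.0) -> torch.Tensor:
    """Low-level rule: forbid a lane change when either gap in the target
    lane is below a safety threshold (hypothetical value of 10 m)."""
    return torch.tensor([
        True,                                                      # keep_lane always allowed
        front_gap_left >= min_gap and rear_gap_left >= min_gap,    # change_left
        front_gap_right >= min_gap and rear_gap_right >= min_gap,  # change_right
    ])

def select_action(q_net: QNetwork, state: torch.Tensor, mask: torch.Tensor) -> str:
    """High-level decision: greedy over Q-values, restricted to rule-safe actions."""
    with torch.no_grad():
        q = q_net(state)
    q[~mask] = float("-inf")  # rule-based constraint overrides unsafe choices
    return ACTIONS[int(q.argmax())]

# Example: the left lane has enough room, the right lane does not.
q_net = QNetwork()
state = torch.randn(10)
mask = rule_based_mask(front_gap_left=25.0, rear_gap_left=15.0,
                       front_gap_right=4.0, rear_gap_right=20.0)
print(select_action(q_net, state, mask))
```

In this arrangement the learned policy never has to discover hard safety limits from reward alone; unsafe lane changes are simply masked out before the greedy selection, which is one plausible reading of how the rule-based constraints complement the DQN.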

Authors (4)
  1. Junjie Wang (164 papers)
  2. Qichao Zhang (27 papers)
  3. Dongbin Zhao (62 papers)
  4. Yaran Chen (23 papers)
Citations (109)