DDM-Lag: A Diffusion-based Decision-making Model for Autonomous Vehicles with Lagrangian Safety Enhancement (2401.03629v2)

Published 8 Jan 2024 in cs.RO

Abstract: Decision-making is a pivotal component of autonomous vehicles (AVs), playing a crucial role in navigating the complexities of autonomous driving. As data-driven methodologies evolve, improving decision-making performance in complex scenarios has become a prominent research focus. Despite considerable advances, current learning-based decision-making approaches leave room for improvement, particularly in policy articulation and safety assurance. To address these challenges, we introduce DDM-Lag, a Diffusion Decision Model augmented with Lagrangian-based safety enhancements. This work casts the sequential decision-making problem in autonomous driving as one of generative modeling, adopting diffusion models as the medium for capturing decision-making patterns. We introduce a hybrid policy update strategy for diffusion models that combines behavior cloning and Q-learning, together with an Actor-Critic architecture to carry out the updates. To make the model's exploration safer, we incorporate additional safety constraints and employ a policy optimization technique based on Lagrangian relaxation to refine policy learning. We evaluate the proposed decision-making method across driving tasks of varying complexity and environmental context. Comparison with established baselines shows our model's superior performance, particularly in safety and overall efficacy.
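
To make the update described in the abstract concrete, the sketch below shows one plausible reading of it: a DDPM-style diffusion policy trained with a behavior-cloning denoising loss, a Q-learning term from a reward critic, and a Lagrangian penalty from a cost critic whose multiplier is adjusted by dual ascent. This is a minimal illustration under stated assumptions, not the authors' implementation: all names, dimensions, and hyperparameters (NoisePredictor, q_reward, q_cost, COST_LIMIT, ETA, LAM_LR) are hypothetical, and critic training (standard TD targets) is omitted for brevity.

```python
# Sketch of a hybrid diffusion-policy update with Lagrangian safety, assuming
# a DDPM-style policy (Ho et al., 2020). Hypothetical modules and constants;
# not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, T = 16, 2, 10           # toy sizes; T diffusion steps
betas = torch.linspace(1e-4, 0.02, T)          # standard DDPM noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class NoisePredictor(nn.Module):
    """Predicts the noise added to an action, conditioned on state and step."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM + 1, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM))
    def forward(self, s, a_t, t):
        t_emb = t.float().unsqueeze(-1) / T    # simple scalar step embedding
        return self.net(torch.cat([s, a_t, t_emb], dim=-1))

def sample_action(eps_model, s):
    """Reverse diffusion: denoise Gaussian noise into an action."""
    a = torch.randn(s.shape[0], ACTION_DIM)
    for t in reversed(range(T)):
        eps = eps_model(s, a, torch.full((s.shape[0],), t))
        beta, ab = betas[t], alphas_bar[t]
        a = (a - beta / (1 - ab).sqrt() * eps) / (1 - beta).sqrt()
        if t > 0:                              # no noise on the final step
            a = a + beta.sqrt() * torch.randn_like(a)
    return a

eps_model = NoisePredictor()
q_reward = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
                         nn.Linear(128, 1))    # reward critic (assumed)
q_cost = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
                       nn.Linear(128, 1))      # safety-cost critic (assumed)
lam = torch.zeros(1)                           # Lagrange multiplier (dual var)
opt = torch.optim.Adam(eps_model.parameters(), lr=3e-4)
COST_LIMIT, ETA, LAM_LR = 0.1, 1.0, 1e-2       # illustrative hyperparameters

def actor_update(s, a_data):
    # 1) Behavior cloning: denoising loss on dataset actions.
    t = torch.randint(0, T, (s.shape[0],))
    noise = torch.randn_like(a_data)
    ab = alphas_bar[t].unsqueeze(-1)
    a_t = ab.sqrt() * a_data + (1 - ab).sqrt() * noise  # forward diffusion
    bc_loss = F.mse_loss(eps_model(s, a_t, t), noise)
    # 2) Q-learning term: push policy samples toward high reward-critic value.
    a_pi = sample_action(eps_model, s)
    q_term = -q_reward(torch.cat([s, a_pi], dim=-1)).mean()
    # 3) Lagrangian relaxation: penalize expected cost above the limit.
    qc = q_cost(torch.cat([s, a_pi], dim=-1)).mean()
    loss = bc_loss + ETA * q_term + lam.item() * (qc - COST_LIMIT)
    opt.zero_grad(); loss.backward(); opt.step()
    # Dual ascent on the multiplier, projected to stay non-negative.
    lam.add_(LAM_LR * (qc.detach() - COST_LIMIT)).clamp_(min=0.0)

s = torch.randn(32, STATE_DIM)
a = torch.rand(32, ACTION_DIM) * 2 - 1
actor_update(s, a)
```

Note the division of roles in this reading: the multiplier is held fixed while the primal (policy) step runs, then updated by dual ascent and clamped at zero, so the safety penalty only binds when the expected cost exceeds the limit.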
