
Trip Planning for Autonomous Vehicles with Wireless Data Transfer Needs Using Reinforcement Learning (2309.12534v1)

Published 21 Sep 2023 in cs.LG, cs.SY, and eess.SY

Abstract: With recent advancements in the field of communications and the Internet of Things, vehicles are becoming more aware of their environment and are evolving towards full autonomy. Vehicular communication opens up the possibility for vehicle-to-infrastructure interaction, where vehicles could share information with components such as cameras, traffic lights, and signage that support a country's road system. As a result, vehicles are becoming more than just a means of transportation; they are collecting, processing, and transmitting massive amounts of data used to make driving safer and more convenient. With 5G cellular networks and beyond, more data bandwidth will be available on our roads, but it may be heterogeneous because of limitations like line of sight, infrastructure, and heterogeneous traffic on the road. This paper addresses the problem of route planning for autonomous vehicles in urban areas, accounting for both driving time and data transfer needs. We propose a novel reinforcement learning solution that prioritizes high-bandwidth roads to meet a vehicle's data transfer requirement, while also minimizing driving time. We compare this approach to traffic-unaware and bandwidth-unaware baselines to show how much better it performs under heterogeneous traffic. This solution could be used as a starting point to understand what good policies look like, which could potentially yield faster, more efficient heuristics in the future.
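The trade-off the abstract describes, minimizing driving time while still meeting a data transfer requirement, can be illustrated with a toy environment. The sketch below is not from the paper; the class name, road parameters, and penalty values are all illustrative assumptions. It models a line-graph road network where, at each step, the agent chooses between a fast low-bandwidth road and a slow high-bandwidth road, and receives a terminal penalty if its data requirement is unmet on arrival.

```python
class ToyRoadEnv:
    """Hypothetical toy road network: drive from node 0 to node `length`.

    Action 0 takes a fast but low-bandwidth road; action 1 takes a slow
    but high-bandwidth road. All numbers are illustrative, not from the
    paper.
    """

    # action -> (drive_time, bandwidth transferred per unit time)
    ROADS = {0: (1.0, 2.0),
             1: (2.0, 10.0)}

    def __init__(self, length=5, data_requirement=30.0):
        self.length = length
        self.data_requirement = data_requirement
        self.reset()

    def reset(self):
        self.pos = 0
        self.data_sent = 0.0
        return (self.pos, self.data_sent)

    def step(self, action):
        time_cost, bandwidth = self.ROADS[action]
        self.pos += 1
        self.data_sent += bandwidth * time_cost
        reward = -time_cost  # minimize driving time
        done = self.pos >= self.length
        if done and self.data_sent < self.data_requirement:
            reward -= 50.0   # assumed penalty for an unmet transfer need
        return (self.pos, self.data_sent), reward, done
```

In this sketch, a bandwidth-unaware policy that always picks the fast road finishes quickly but incurs the terminal penalty, ending up with a lower return than a policy that accepts slower roads to satisfy the data requirement. This mirrors the baseline comparison the abstract mentions; an environment shaped like this could be wrapped in the Gym interface and trained with an off-the-shelf agent such as PPO or DQN, which the paper's references suggest were the tools used.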
