Self-organized free-flight arrival for urban air mobility (2404.03710v2)

Published 4 Apr 2024 in cs.LG and cs.AI

Abstract: Urban air mobility is an innovative mode of transportation in which electric vertical takeoff and landing (eVTOL) vehicles operate between nodes called vertiports. We outline a self-organized vertiport arrival system based on deep reinforcement learning. The airspace around the vertiport is assumed to be circular, and the vehicles can freely operate inside. Each aircraft is considered an individual agent and follows a shared policy, resulting in decentralized actions that are based on local information. We investigate the development of the reinforcement learning policy during training and illustrate how the algorithm moves from suboptimal local holding patterns to a safe and efficient final policy. The latter is validated in simulation-based scenarios, including robustness analyses against sensor noise and a changing distribution of inbound traffic. Lastly, we deploy the final policy on small-scale unmanned aerial vehicles to showcase its real-world usability.
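The core idea in the abstract is that every aircraft runs the same policy on its own local observation, so control is decentralized with no central sequencer. The sketch below illustrates only that control flow; the paper trains the policy with deep reinforcement learning, whereas here a hand-coded stand-in policy is used, and all function names, observation features, and parameters are hypothetical, not taken from the paper.

```python
import math

def local_observation(own, others, vertiport=(0.0, 0.0)):
    """Local information for one agent: range and bearing to the vertiport,
    plus range to the nearest other aircraft (features are illustrative)."""
    ox, oy = own
    r_vp = math.hypot(vertiport[0] - ox, vertiport[1] - oy)
    b_vp = math.atan2(vertiport[1] - oy, vertiport[0] - ox)
    nearest = min(others, key=lambda p: math.hypot(p[0] - ox, p[1] - oy))
    r_nb = math.hypot(nearest[0] - ox, nearest[1] - oy)
    return r_vp, b_vp, r_nb

def shared_policy(obs, separation=1.0):
    """Stand-in for the trained network: fly toward the vertiport unless the
    nearest neighbour is inside the separation minimum, then turn away."""
    r_vp, b_vp, r_nb = obs
    if r_nb < separation:
        return b_vp + math.pi / 2  # avoidance: offset the approach course
    return b_vp                    # approach: head straight for the vertiport

def step(positions, speed=0.1):
    """One synchronous decision step: each agent applies the SAME policy to
    its own observation -- decentralized actions from local information."""
    new_positions = []
    for i, own in enumerate(positions):
        others = positions[:i] + positions[i + 1:]
        heading = shared_policy(local_observation(own, others))
        new_positions.append((own[0] + speed * math.cos(heading),
                              own[1] + speed * math.sin(heading)))
    return new_positions

# Three inbound aircraft converging on a vertiport at the origin.
fleet = [(3.0, 0.0), (0.0, 3.0), (-3.0, -3.0)]
for _ in range(20):
    fleet = step(fleet)
```

Because the policy is shared, adding or removing aircraft changes nothing about the per-agent computation, which is what makes this formulation scale with traffic density.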

