A Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs (2402.04418v1)

Published 6 Feb 2024 in cs.RO, cs.SY, and eess.SY

Abstract: Multirotor UAVs are used across a wide spectrum of civilian and public-domain applications. Navigation controllers endowed with different attributes and onboard sensor suites enable safe autonomous or semi-autonomous multirotor flight, operation, and functionality under nominal and detrimental conditions and under external disturbances, even in uncertain and dynamically changing environments. During the last decade, given the rapid increase in available computational power, different learning-based algorithms have been derived, implemented, and tested to navigate and control, among other systems, multirotor UAVs. Learning algorithms have been, and are, used to derive data-driven models, to identify parameters, to track objects, to develop navigation controllers, and to learn the environments in which multirotors operate. Learning algorithms combined with model-based control techniques have proven beneficial when applied to multirotors. This survey summarizes published research since 2015, dividing algorithms, techniques, and methodologies into offline and online learning categories, and further classifying them into machine learning, deep learning, and reinforcement learning sub-categories. An integral focus of this survey is on online learning algorithms as applied to multirotors, with the aim of registering the types of learning techniques that are hard or almost-hard real-time implementable, and of understanding what information is learned, why, how, and how fast. The outcome of the survey offers a clear picture of the recent state of the art and of the kinds of learning-based algorithms that may be implemented, tested, and executed in real time.
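The survey's top-level split between offline and online learning can be illustrated on a toy problem. The sketch below is not from the paper: it fits a hypothetical linear model y = w·x (standing in for, say, a learned residual term of multirotor dynamics) once on a pre-collected batch (offline), and incrementally one sample at a time as data streams in during flight (online, here plain stochastic gradient descent). All names, data, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Illustrative toy data: a linear map y = w·x plus small sensor-like noise.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # hypothetical "true" parameters
X = rng.normal(size=(200, 2))           # 200 pre-collected feature samples
y = X @ true_w + 0.01 * rng.normal(size=200)

# Offline learning: one batch fit over the whole pre-collected dataset
# (ordinary least squares), done before deployment.
w_offline, *_ = np.linalg.lstsq(X, y, rcond=None)

# Online learning: incremental update from one sample per "control cycle"
# (stochastic gradient descent on the squared error), feasible in-flight.
w_online = np.zeros(2)
lr = 0.05                               # assumed learning rate
for x_t, y_t in zip(X, y):
    err = w_online @ x_t - y_t
    w_online -= lr * err * x_t          # single-sample gradient step

# Both estimates end up near true_w; the online variant never needs the
# full dataset in memory, which is what makes it real-time amenable.
```

The real-time question the survey emphasizes reduces, in this caricature, to whether the per-sample update (one dot product and one scaled subtraction here) fits inside the controller's cycle budget.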
