From Novice to Skilled: RL-based Shared Autonomy Communicating with Pilots in UAV Multi-Task Missions (2306.09600v2)

Published 16 Jun 2023 in cs.RO

Abstract: Multi-task missions for unmanned aerial vehicles (UAVs) involving inspection and landing tasks are challenging for novice pilots due to the difficulties associated with depth perception and the control interface. We propose a shared autonomy system, alongside supplementary information displays, to help pilots successfully complete multi-task missions without any pilot training. Our approach comprises three modules: (1) a perception module that encodes visual information into a latent representation, (2) a policy module that augments the pilot's actions, and (3) an information augmentation module that provides additional information to the pilot. The policy module is trained in simulation with simulated users and transferred to the real world without modification in a user study (n=29), alongside alternative supplementary information schemes including learnt red/green light feedback cues and an augmented reality display. The pilot's intent is unknown to the policy module and is inferred from the pilot's input and the UAV's states. The assistant increased the success rate of the landing task from 16.67% to 95.59% and of the inspection task from 54.29% to 96.22%. With the assistant, inexperienced pilots achieved performance similar to that of experienced pilots. Red/green light feedback cues reduced the required time by 19.53% and the trajectory length by 17.86% for the inspection task, and participants rated them as their preferred condition owing to the intuitive interface and the reassurance they provided. This work demonstrates that simple user models can train shared autonomy systems in simulation that transfer to physical tasks, estimating user intent and providing effective assistance and information to the pilot.
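
To make the control loop concrete, here is a minimal sketch of how the three modules described in the abstract could fit together at inference time. This is not the authors' implementation: the module names, network sizes, the latent/state/action dimensions, and the additive action-blending rule are all illustrative assumptions; the abstract only states that the policy module augments the pilot's actions while inferring intent from the pilot's input and the UAV's state.

```python
# Hypothetical sketch of the three-module shared-autonomy loop.
# All names, dimensions, and the blending rule are assumptions
# for illustration, not the paper's released code.

import torch
import torch.nn as nn

LATENT_DIM = 32   # assumed size of the perception module's latent code
STATE_DIM = 10    # assumed UAV state vector (pose, velocity, etc.)
ACTION_DIM = 4    # e.g. roll, pitch, yaw rate, thrust

class PerceptionModule(nn.Module):
    """Encodes a camera image into a compact latent representation."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, LATENT_DIM),
        )

    def forward(self, image):            # image: (B, 3, H, W)
        return self.encoder(image)       # -> (B, LATENT_DIM)

class PolicyModule(nn.Module):
    """Outputs a corrective action from the latent code, UAV state, and
    pilot input; the pilot's intent is never given explicitly, only
    inferred from these observations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Tanh(),
        )

    def forward(self, latent, state, pilot_action):
        x = torch.cat([latent, state, pilot_action], dim=-1)
        return self.net(x)

def assisted_command(perception, policy, image, state, pilot_action,
                     blend=0.5):
    """One control step: augment the pilot's stick input with the
    assistant's correction. The additive blend is an assumption; the
    abstract only says the policy 'augments the pilot's actions'."""
    latent = perception(image)
    correction = policy(latent, state, pilot_action)
    command = pilot_action + blend * correction
    return command.clamp(-1.0, 1.0)      # keep within actuator limits

# Example step with random tensors standing in for real sensor data.
if __name__ == "__main__":
    perception, policy = PerceptionModule(), PolicyModule()
    img = torch.rand(1, 3, 64, 64)
    state = torch.zeros(1, STATE_DIM)
    stick = torch.tensor([[0.2, -0.1, 0.0, 0.5]])
    print(assisted_command(perception, policy, img, state, stick))
```

Framing the assistant's output as a bounded residual added to the pilot's command is a common shared-autonomy pattern: a zero correction leaves the pilot's input untouched, keeping the human in the loop, while a simulation-trained policy can still nudge the UAV toward its inferred goal.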

