
N-MPC for Deep Neural Network-Based Collision Avoidance exploiting Depth Images (2402.13038v1)

Published 20 Feb 2024 in cs.RO

Abstract: This paper introduces a Nonlinear Model Predictive Control (N-MPC) framework that exploits a deep neural network processing onboard-captured depth images for collision avoidance in trajectory-tracking tasks with UAVs. The network is trained on simulated depth images to output a collision score for queried 3D points within the sensor field of view. This network is then translated into an algebraic symbolic expression and included in the N-MPC, explicitly constraining predicted positions to be collision-free throughout the receding horizon. The N-MPC achieves real-time control of a UAV at a control frequency of 100 Hz. The proposed framework is validated through statistical analysis of the collision classifier network, as well as Gazebo simulations and real experiments, to assess the resulting capability of the N-MPC to effectively avoid collisions in cluttered environments. The associated code is released open-source along with the training images.

