
MPCC++: Model Predictive Contouring Control for Time-Optimal Flight with Safety Constraints (2403.17551v2)

Published 26 Mar 2024 in cs.RO

Abstract: Quadrotor flight is an extremely challenging problem due to the limited control authority encountered at the limit of handling. Model Predictive Contouring Control (MPCC) has emerged as a promising model-based approach for time-optimization problems such as drone racing. However, the standard MPCC formulation used in quadrotor racing encodes the gates directly in the cost function, creating a multi-objective optimization that continuously trades off maximizing progress against tracking the path accurately. This paper introduces three key components that enhance the state-of-the-art MPCC approach for drone racing. First and foremost, we provide safety guarantees in the form of a track constraint and a terminal set. The track constraint is designed as a spatial constraint that prevents gate collisions while leaving the cost function free to optimize for time alone. Second, we augment the existing first-principles dynamics with a residual term that captures complex aerodynamic effects and thrust forces learned directly from real-world data. Third, we use Trust Region Bayesian Optimization (TuRBO), a state-of-the-art global Bayesian optimization algorithm, to tune the hyperparameters of the MPCC controller given a sparse reward based on lap-time minimization. The proposed approach achieves lap times similar to those of the best-performing RL policy and outperforms the best model-based controller while satisfying constraints. In both simulation and the real world, our approach consistently prevents gate crashes with a 100% success rate while pushing the quadrotor to its physical limits, reaching speeds of more than 80 km/h.
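The core MPCC trade-off described above can be sketched in a few lines: the stage cost rewards progress along the reference path while penalizing contouring (lateral) error, and the gate is enforced as a hard spatial constraint rather than a cost term. This is a minimal illustrative sketch, not the paper's actual formulation; the function names, the box-shaped gate region, and the weights are all assumptions made here for clarity.

```python
import numpy as np

def contouring_error(position, path_point, path_tangent):
    """Distance from `position` to the reference path, orthogonal to the tangent."""
    delta = position - path_point
    # Remove the longitudinal (lag) component along the unit tangent.
    lag = np.dot(delta, path_tangent) * path_tangent
    return np.linalg.norm(delta - lag)

def stage_cost(position, progress_rate, path_point, path_tangent,
               w_contour=1.0, w_progress=2.0):
    """MPCC-style stage cost: penalize contour error, reward progress.

    Weights are illustrative; in the paper they are tuned automatically.
    """
    e_c = contouring_error(position, path_point, path_tangent)
    return w_contour * e_c**2 - w_progress * progress_rate

def inside_gate(position, gate_center, half_width):
    """Hard spatial constraint: an axis-aligned box around the gate opening.

    Encoding the gate as a constraint (rather than in the cost) is the key
    idea behind the track constraint described in the abstract.
    """
    return bool(np.all(np.abs(position - gate_center) <= half_width))
```

In the cost-only formulation, a poorly balanced `w_contour` vs. `w_progress` can let the optimizer cut a gate to gain progress; with `inside_gate` as a hard constraint, that trade-off can no longer sacrifice safety.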
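The hyperparameter-tuning loop with a sparse lap-time reward can likewise be sketched. TuRBO itself fits Gaussian-process surrogates inside adaptive trust regions (an implementation ships with BoTorch); to keep this sketch self-contained, a plain random search stands in for it, and `simulate_lap` is a hypothetical black-box evaluator standing in for a full simulator rollout.

```python
import random

def simulate_lap(weights):
    """Hypothetical evaluator: returns a lap time in seconds for a given
    (w_contour, w_progress) pair. A toy quadratic landscape stands in for
    the real simulator, with a fictitious optimum at (1.0, 2.0)."""
    w_contour, w_progress = weights
    return 5.0 + (w_contour - 1.0) ** 2 + (w_progress - 2.0) ** 2

def tune(n_trials=200, seed=0):
    """Minimize lap time over controller weights via random search
    (stand-in for the trust-region Bayesian optimization used in the paper)."""
    rng = random.Random(seed)
    best_w, best_t = None, float("inf")
    for _ in range(n_trials):
        w = (rng.uniform(0.0, 5.0), rng.uniform(0.0, 5.0))
        t = simulate_lap(w)
        if t < best_t:
            best_w, best_t = w, t
    return best_w, best_t
```

The key property this loop shares with the paper's setup is that the objective is sparse and black-box: only the final lap time is observed, with no gradient information, which is why sample-efficient global optimizers such as TuRBO are attractive here.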

