Harmonic Control Lyapunov Barrier Functions for Constrained Optimal Control with Reach-Avoid Specifications (2310.02869v2)

Published 4 Oct 2023 in math.OC, cs.LG, and math.AP

Abstract: This paper introduces harmonic control Lyapunov barrier functions (harmonic CLBFs) that aid in constrained control problems such as reach-avoid problems. Harmonic CLBFs exploit the maximum principle that harmonic functions satisfy to encode the properties of control Lyapunov barrier functions (CLBFs). As a result, they can be initialized at the start of an experiment rather than trained on sample trajectories. The control inputs are selected to maximize the inner product of the system dynamics with the steepest descent direction of the harmonic CLBF. Numerical results are presented for four different systems under different reach-avoid environments. Harmonic CLBFs exhibit a significantly lower risk of entering unsafe regions and a high probability of reaching the goal region.
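The two ingredients named in the abstract — a harmonic function with boundary values encoding safe/goal sets, and a control chosen to maximize the inner product of the dynamics with the steepest-descent direction — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the grid sizes, obstacle placement, Jacobi iteration, and single-integrator dynamics f(x, u) = u are all assumptions made here for demonstration.

```python
import numpy as np

def solve_harmonic(n=50, iters=5000):
    """Jacobi iteration for Laplace's equation on an n x n grid.

    Boundary conditions encode the CLBF roles: value 1 on unsafe
    cells (outer walls plus a square obstacle), value 0 on the goal
    cell. The converged interior is discretely harmonic, so by the
    maximum principle it stays strictly between 0 and 1 there.
    """
    h = np.full((n, n), 0.5)
    fixed = np.zeros((n, n), dtype=bool)

    # Unsafe set: domain boundary and a square obstacle (assumed layout).
    h[0, :] = h[-1, :] = h[:, 0] = h[:, -1] = 1.0
    fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True
    h[20:30, 20:30] = 1.0
    fixed[20:30, 20:30] = True

    # Goal set: a single cell (assumed location).
    h[5, 45] = 0.0
    fixed[5, 45] = True

    for _ in range(iters):
        # Average of the four neighbors; wrap-around from np.roll only
        # touches boundary cells, which are pinned by `fixed` below.
        avg = 0.25 * (np.roll(h, 1, 0) + np.roll(h, -1, 0)
                      + np.roll(h, 1, 1) + np.roll(h, -1, 1))
        h = np.where(fixed, h, avg)
    return h

def greedy_control(h, ix, iy, inputs):
    """Pick the input maximizing <f(x, u), -grad h> for assumed
    single-integrator dynamics f(x, u) = u, i.e. the input best
    aligned with the steepest-descent direction of h."""
    gx = (h[ix + 1, iy] - h[ix - 1, iy]) / 2.0  # central differences
    gy = (h[ix, iy + 1] - h[ix, iy - 1]) / 2.0
    grad = np.array([gx, gy])
    return max(inputs, key=lambda u: -np.dot(np.asarray(u), grad))
```

Because the value is pinned at 1 on the unsafe set and 0 on the goal, descending h steers away from obstacles and toward the goal, and the maximum principle guarantees no spurious interior local minima — the property the abstract exploits. The paper's numerical results use a finite-element solver (DOLFIN) rather than this toy Jacobi sweep.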

