Pontryagin Optimal Control via Neural Networks (2212.14566v3)

Published 30 Dec 2022 in eess.SY, cs.LG, and cs.SY

Abstract: Solving real-world optimal control problems is challenging, as the complex, high-dimensional system dynamics are usually unknown to the decision maker, which makes it hard to compute optimal control actions numerically. To address these modeling and computation challenges, this paper integrates neural networks with Pontryagin's Maximum Principle (PMP) and proposes a sample-efficient framework, NN-PMP-Gradient. The resulting controller can be implemented for systems with unknown and complex dynamics. By taking an iterative approach, the framework not only utilizes accurate surrogate models parameterized by neural networks, but also efficiently recovers the optimality conditions, along with the optimal action sequences, via the PMP conditions. Numerical simulations on a Linear Quadratic Regulator, energy arbitrage of a grid-connected lossy battery, control of a single pendulum, and two MuJoCo locomotion tasks demonstrate that NN-PMP-Gradient is a general and versatile computational tool for finding optimal solutions. Compared with widely applied model-free and model-based reinforcement learning (RL) algorithms, NN-PMP-Gradient achieves higher sample efficiency and better performance in terms of control objectives.
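
For readers unfamiliar with the optimality conditions the abstract refers to, the standard discrete-time PMP conditions are sketched below; the notation is generic and chosen here for illustration, not taken from the paper.

```latex
% Standard discrete-time PMP conditions (generic notation, not the paper's).
% Problem: minimize \sum_{t=0}^{T-1} c(x_t, u_t) subject to x_{t+1} = f(x_t, u_t).
\begin{aligned}
  H_t &= c(x_t, u_t) + \lambda_{t+1}^{\top} f(x_t, u_t)
      && \text{(Hamiltonian)} \\
  x_{t+1} &= f(x_t, u_t), \quad x_0 \text{ given}
      && \text{(forward state equation)} \\
  \lambda_t &= \nabla_{x} c(x_t, u_t) + \bigl(\nabla_{x} f(x_t, u_t)\bigr)^{\top} \lambda_{t+1}
      && \text{(backward costate equation)} \\
  u_t^{\star} &= \arg\min_{u}\, H_t(x_t, u, \lambda_{t+1})
      && \text{(control optimality)}
\end{aligned}
```

In a framework like the one described, the unknown dynamics f would be replaced by a learned neural surrogate and the control sequence improved by gradient steps on the Hamiltonian, as the Python sketch below illustrates.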

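As a rough illustration of the iterative scheme the abstract describes, the sketch below assumes a pre-trained PyTorch surrogate `f_theta` for the unknown dynamics and a known stage cost; the function names and the plain Adam update on the control sequence are illustrative choices under those assumptions, not the authors' reference implementation.

```python
import torch

def nn_pmp_gradient(f_theta, cost, x0, T, u_dim, iters=200, lr=1e-2):
    """Illustrative NN-PMP-style planning loop (a sketch, not the paper's code).

    f_theta : learned surrogate dynamics, x_{t+1} = f_theta(x_t, u_t)
    cost    : known stage cost c(x_t, u_t), returning a scalar tensor
    x0      : initial state tensor
    T, u_dim: planning horizon and control dimension
    """
    u = torch.zeros(T, u_dim, requires_grad=True)   # control sequence to optimize
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(iters):
        x, total = x0, torch.zeros(())
        # Forward rollout through the learned surrogate, accumulating cost.
        for t in range(T):
            total = total + cost(x, u[t])
            x = f_theta(x, u[t])
        opt.zero_grad()
        # Backprop through the rollout computes the PMP costates implicitly:
        # the gradient of the total cost w.r.t. u_t is exactly dH_t/du_t.
        total.backward()
        opt.step()  # gradient step on the controls toward Hamiltonian minimization
    return u.detach()
```

In practice one would first fit `f_theta` on sampled transitions (e.g., from the pendulum or a MuJoCo environment) and then call such a routine to plan an action sequence from a given initial state.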