
Acceleration of Gradient-based Path Integral Method for Efficient Optimal and Inverse Optimal Control (1710.06578v2)

Published 18 Oct 2017 in cs.SY

Abstract: This paper presents a new accelerated path integral method that iteratively searches for optimal controls in a small number of iterations. The study builds on the recent observation that a path integral method for reinforcement learning can be interpreted as gradient descent. This observation also applies to an iterative path integral method for optimal control, which makes a compelling case for applying standard gradient-descent enhancements such as momentum-based acceleration, step-size adaptation, and their combinations. We introduce such methods into the path integral and demonstrate that momentum-based methods, such as Nesterov Accelerated Gradient and Adam, can significantly improve the convergence rate when searching for optimal controls in simulated control systems. We also demonstrate that the accelerated path integral can improve the performance of model predictive control on various vehicle navigation tasks. Finally, we represent the accelerated path integral method as a recurrent network, an accelerated version of the previously proposed path integral networks (PI-Net). The accelerated PI-Net can be trained for inverse optimal control more efficiently and with less RAM than the original PI-Net.
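
To make the gradient-descent view concrete, the following is a minimal sketch (not the authors' implementation): the standard path-integral update, a cost-weighted average of sampled control perturbations, is treated as a descent direction and fed into an off-the-shelf accelerated optimizer (Adam here). The toy double-integrator dynamics, the cost function, and all hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

H, K, LAMBDA, SIGMA = 20, 64, 1.0, 0.5  # horizon, rollouts, temperature, exploration noise std

def rollout_cost(u_seq):
    """Toy 1-D double integrator: drive the position toward 1.0 with small controls."""
    x = np.zeros(2)  # [position, velocity]
    cost = 0.0
    for u in u_seq:
        x = x + 0.1 * np.array([x[1], u])
        cost += (x[0] - 1.0) ** 2 + 1e-3 * u ** 2
    return cost

def pi_update_direction(u_seq):
    """Standard path-integral update: cost-weighted average of control perturbations.
    Under the gradient-descent interpretation, this acts as a descent direction on a
    smoothed cost, so it can be passed to any first-order optimizer."""
    noise = SIGMA * rng.standard_normal((K, H))
    costs = np.array([rollout_cost(u_seq + noise[k]) for k in range(K)])
    weights = np.exp(-(costs - costs.min()) / LAMBDA)
    weights /= weights.sum()
    return weights @ noise  # shape (H,): weighted average of perturbations

# Accelerated iteration: Adam applied to the path-integral direction.
u = np.zeros(H)
m, v = np.zeros(H), np.zeros(H)
beta1, beta2, lr, eps = 0.9, 0.999, 0.5, 1e-8
for t in range(1, 51):
    # The plain method would do u += pi_update_direction(u); here we negate it
    # so it plays the role of a gradient inside the Adam update.
    g = -pi_update_direction(u)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat, v_hat = m / (1 - beta1 ** t), v / (1 - beta2 ** t)
    u -= lr * m_hat / (np.sqrt(v_hat) + eps)

print("final trajectory cost:", rollout_cost(u))
```

Swapping the Adam step for a Nesterov-style momentum update would give the NAG variant mentioned in the abstract, while a plain step u += pi_update_direction(u) recovers the unaccelerated iterative path integral.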

Citations (19)
