
AC4MPC: Actor-Critic Reinforcement Learning for Nonlinear Model Predictive Control (2406.03995v1)

Published 6 Jun 2024 in eess.SY, cs.AI, and cs.SY

Abstract: Model predictive control (MPC) and reinforcement learning (RL) are two powerful control strategies with, arguably, complementary advantages. In this work, we show how actor-critic RL techniques can be leveraged to improve the performance of MPC. The RL critic is used as an approximation of the optimal value function, and an actor roll-out provides an initial guess for the primal variables of the MPC. A parallel control architecture is proposed where each MPC instance is solved twice for different initial guesses. Besides the actor roll-out initialization, a shifted initialization from the previous solution is used. Thereafter, the actor and the critic are again used to approximately evaluate the infinite-horizon cost of these trajectories. The control actions from the lowest-cost trajectory are applied to the system at each time step. We establish that the proposed algorithm is guaranteed to outperform the original RL policy plus an error term that depends on the accuracy of the critic and decays with the horizon length of the MPC formulation. Moreover, we do not require globally optimal solutions for these guarantees to hold. The approach is demonstrated on an illustrative toy example and an autonomous driving (AD) overtaking scenario.
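
As a concrete reading of the abstract, the per-step logic might look like the Python sketch below. The helpers `actor`, `critic`, `dynamics`, `stage_cost`, and `solve_mpc` (a local NLP solver that refines a warm-started trajectory) are hypothetical placeholders, not the authors' implementation; the paper solves the two MPC instances in parallel, whereas this sketch runs them sequentially for clarity.

```python
# Sketch of one AC4MPC control step, assuming hypothetical helpers:
#   actor(x) -> u, critic(x) -> value estimate, dynamics(x, u) -> next state,
#   stage_cost(x, u) -> scalar, solve_mpc(x0, xs_guess, us_guess) -> (xs, us).
import numpy as np

def actor_rollout(actor, dynamics, x0, horizon):
    """Roll the actor policy forward to build an initial guess (states, controls)."""
    xs, us, x = [x0], [], x0
    for _ in range(horizon):
        u = actor(x)
        x = dynamics(x, u)
        us.append(u)
        xs.append(x)
    return np.array(xs), np.array(us)

def shift_initialization(prev_xs, prev_us, actor, dynamics):
    """Shift the previous MPC solution one step; extend the tail with an actor action."""
    u_last = actor(prev_xs[-1])
    x_last = dynamics(prev_xs[-1], u_last)
    xs = np.vstack([prev_xs[1:], x_last[None]])
    us = np.vstack([prev_us[1:], u_last[None]])
    return xs, us

def approx_infinite_horizon_cost(xs, us, stage_cost, critic):
    """Finite-horizon stage costs plus the critic's estimate as terminal cost."""
    return sum(stage_cost(x, u) for x, u in zip(xs[:-1], us)) + critic(xs[-1])

def ac4mpc_step(x, prev_sol, actor, critic, dynamics, stage_cost, solve_mpc, horizon):
    # Two initial guesses: an actor roll-out and the shifted previous solution.
    guesses = [actor_rollout(actor, dynamics, x, horizon)]
    if prev_sol is not None:
        guesses.append(shift_initialization(*prev_sol, actor, dynamics))

    # Solve one MPC instance per guess (in parallel in the paper).
    candidates = [solve_mpc(x, xs0, us0) for xs0, us0 in guesses]

    # Score candidates by approximate infinite-horizon cost; apply the cheapest.
    costs = [approx_infinite_horizon_cost(xs, us, stage_cost, critic)
             for xs, us in candidates]
    xs, us = candidates[int(np.argmin(costs))]
    return us[0], (xs, us)  # first control action; solution kept for warm-starting
```

Under this reading, the critic only has to rank finite trajectories rather than be globally accurate, which is consistent with the abstract's claim that the guarantees hold without globally optimal MPC solutions.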

Authors (6)
  1. Rudolf Reiter (10 papers)
  2. Andrea Ghezzi (7 papers)
  3. Katrin Baumgärtner (14 papers)
  4. Jasper Hoffmann (7 papers)
  5. Robert D. McAllister (4 papers)
  6. Moritz Diehl (96 papers)
Citations (3)
