
Learning $Q$-function approximations for hybrid control problems

Published 28 May 2021 in math.OC (arXiv:2105.13517v2)

Abstract: The main challenge in controlling hybrid systems arises from having to consider an exponential number of sequences of future modes to make good long-term decisions. Model predictive control (MPC) computes a control action through a finite-horizon optimisation problem. A key ingredient in this problem is a terminal cost, which accounts for the system's evolution beyond the chosen horizon. A good terminal cost can reduce the horizon length required for good control action and is often tuned empirically by observing performance. We build on the idea of using $N$-step $Q$-functions ($\mathcal{Q}^{(N)}$) in the MPC objective to avoid having to choose a terminal cost. We present a formulation incorporating the system dynamics and constraints to approximate the optimal $\mathcal{Q}^{(N)}$-function, and algorithms to train the approximation parameters through an exploration of the state space. We test the control policy derived from the trained approximations on two benchmark problems through simulations and observe that our algorithms are able to learn good $\mathcal{Q}^{(N)}$-approximations for high-dimensional hybrid systems from a relatively small dataset. Finally, we compare our controller's performance against that of hybrid MPC in terms of computation time and closed-loop cost.
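The abstract's core idea, replacing the MPC terminal cost with a learned value beyond the horizon, can be illustrated with a minimal sketch. This is not the paper's formulation: it assumes a toy linear system with a finite input set standing in for hybrid mode choices, a quadratic stage cost, and a hypothetical learned quadratic terminal value; all names and parameters below are illustrative.

```python
import itertools
import numpy as np

# Toy discrete-input system standing in for a hybrid system: the input set U
# is finite, so an H-step lookahead enumerates |U|^H input (mode) sequences,
# mirroring the exponential blow-up the abstract mentions.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([0.0, 0.1])
U = [-1.0, 0.0, 1.0]      # finite input/mode set (illustrative)
Q_stage = np.eye(2)       # stage cost on the state
R_stage = 0.1             # stage cost on the input

def step(x, u):
    return A @ x + B * u

def stage_cost(x, u):
    return float(x @ Q_stage @ x + R_stage * u * u)

# Hypothetical stand-in for a trained terminal value: a quadratic x' P x
# approximating the optimal cost-to-go beyond the horizon. In the paper this
# role is played by the learned Q^(N)-approximation.
P_learned = 10.0 * np.eye(2)

def terminal_value(x):
    return float(x @ P_learned @ x)

def mpc_action(x0, H=3):
    """Enumerate all length-H input sequences; return the first input of the
    sequence minimising stage costs plus the learned terminal value."""
    best_u, best_cost = None, np.inf
    for seq in itertools.product(U, repeat=H):
        x, cost = x0.copy(), 0.0
        for u in seq:
            cost += stage_cost(x, u)
            x = step(x, u)
        cost += terminal_value(x)
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

u = mpc_action(np.array([1.0, 0.0]))
```

A better-informed terminal value lets a short horizon H produce actions comparable to a much longer lookahead, which is the practical payoff the abstract claims for learning $\mathcal{Q}^{(N)}$.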
