
Distributed Learning Model Predictive Control for Linear Systems (2006.13406v1)

Published 24 Jun 2020 in eess.SY and cs.SY

Abstract: This paper presents a distributed learning model predictive control (DLMPC) scheme for distributed linear time invariant systems with coupled dynamics and state constraints. The proposed solution method is based on an online distributed optimization scheme with nearest-neighbor communication. If the control task is iterative and data from previous feasible iterations are available, local data are exploited by the subsystems in order to construct the local terminal set and terminal cost, which guarantee recursive feasibility and asymptotic stability, as well as performance improvement over iterations. In case a first feasible trajectory is difficult to obtain, or the task is non-iterative, we further propose an algorithm that efficiently explores the state-space and generates the data required for the construction of the terminal cost and terminal constraint in the MPC problem in a safe and distributed way. In contrast to other distributed MPC schemes which use structured positive invariant sets, the proposed approach involves a control invariant set as the terminal set, on which we do not impose any distributed structure. The proposed iterative scheme converges to the global optimal solution of the underlying infinite horizon optimal control problem under mild conditions. Numerical experiments demonstrate the effectiveness of the proposed DLMPC scheme.
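The data-driven terminal ingredients described in the abstract follow the learning MPC line of work associated with the authors: states visited in previous feasible iterations are stored, the terminal set is taken as (a subset of) their convex hull, and the terminal cost interpolates the recorded cost-to-go values. The sketch below illustrates that building block for a single centralized linear system using cvxpy; the function name, matrices, and toy data are illustrative assumptions, not the paper's distributed implementation.

```python
import numpy as np
import cvxpy as cp


def lmpc_step(A, B, x0, ss_states, ss_costs, N=5, Q=None, R=None):
    """One MPC solve with a terminal set/cost built from stored data (LMPC-style sketch).

    ss_states: (n, K) states stored from previously feasible iterations
    ss_costs:  (K,) recorded cost-to-go values for those states
    The terminal state is constrained to the convex hull of ss_states and the
    terminal cost is the matching convex combination of ss_costs.
    """
    n, m = A.shape[0], B.shape[1]
    Q = np.eye(n) if Q is None else Q
    R = np.eye(m) if R is None else R

    x = cp.Variable((n, N + 1))
    u = cp.Variable((m, N))
    lam = cp.Variable(ss_states.shape[1], nonneg=True)  # convex-hull multipliers

    cost = 0
    cons = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k]]

    # Terminal set: convex hull of stored states; terminal cost: interpolated cost-to-go
    cons += [x[:, N] == ss_states @ lam, cp.sum(lam) == 1]
    cost += ss_costs @ lam

    cp.Problem(cp.Minimize(cost), cons).solve()
    return u[:, 0].value


if __name__ == "__main__":
    # Toy double-integrator example with a few "stored" states (hypothetical data)
    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    B = np.array([[0.0], [1.0]])
    ss_states = np.array([[0.0, 0.5, 1.0], [0.0, -0.2, -0.5]])
    ss_costs = np.array([0.0, 1.3, 4.0])
    print(lmpc_step(A, B, np.array([1.0, 0.0]), ss_states, ss_costs))
```

In the paper's distributed setting, each subsystem instead builds its local terminal set and cost from locally available data, and the coupled finite-horizon problem is solved by an online distributed optimization scheme with nearest-neighbor communication.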

Authors (5)
  1. Yvonne R. Stürz (13 papers)
  2. Edward L. Zhu (10 papers)
  3. Ugo Rosolia (43 papers)
  4. Karl H. Johansson (239 papers)
  5. Francesco Borrelli (105 papers)
Citations (7)
