
Receding Horizon Control Fundamentals

Updated 17 March 2026
  • Receding Horizon Control is a dynamic optimization framework that solves finite-horizon control problems iteratively using current state feedback.
  • It ensures stability and feasibility by employing terminal cost design, invariance constraints, and robust performance measures under uncertainty.
  • Its adaptability to distributed, stochastic, and data-driven settings makes it a powerful tool for real-time, constraint-bound applications in robotics and automation.

Receding Horizon Control (RHC)—also known as Model Predictive Control (MPC)—is an advanced control paradigm wherein, at each sampling instant, a finite-horizon optimal control problem is solved using the current state as the initial condition, the computed control is applied for a short interval, and the process is repeated with the system’s updated state. Originating in the context of process control, RHC has become the dominant strategy for controlling complex dynamic systems—both deterministic and stochastic, linear and nonlinear, centralized and distributed—under constraints and real-time feedback. The essential feature of RHC is its dynamic, iterative optimization and deployment of control actions over a sliding window or prediction horizon, accommodating disturbances, uncertainty, constraints, and non-stationarity.

1. Core Principles and Formal Structure

The generic RHC loop operates as follows: at time $k$, given state $x_k$, a finite-horizon optimal control problem is formulated over horizon $N$ (or a continuous time $T$), minimizing a cost functional (often quadratic in state and input) subject to system dynamics and constraints (state, input, etc.). Only the first control input $u^*_k$ from the computed sequence $\{u^*_k, u^*_{k+1}, \ldots\}$ is applied; at the next step, the process re-solves with updated measurements.

Formally, for a discrete-time LTI system

$$x_{k+1} = A x_k + B u_k$$

with constraints $x_k \in \mathcal X$, $u_k \in \mathcal U$, the RHC problem at each $k$ is (see (Kunisch et al., 2018, Breiten et al., 2018)):

$$\min_{u_{k:k+N-1}} \sum_{i=k}^{k+N-1} \ell(x_i, u_i) + \phi(x_{k+N})$$

subject to the dynamics, the initial state $x_k$, and the constraints, where $\ell(\cdot)$ is the stage cost and $\phi(\cdot)$ the terminal cost. The moving ("receding") finite prediction horizon replaces an often intractable infinite-horizon problem with a sequence of tractable subproblems, relying on suitable choices of terminal penalties/constraints to ensure optimality, stability, and feasibility.
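The loop above can be sketched for the unconstrained LTI/quadratic case, where each finite-horizon subproblem is solved exactly by a backward Riccati recursion and only the first input is applied. The system matrices and weights below are illustrative placeholders, not taken from any cited paper:

```python
import numpy as np

def rhc_step(A, B, Q, R, Qf, N, x0):
    """Solve the finite-horizon LQR subproblem by backward Riccati
    recursion and return only the first optimal input u*_k."""
    P = Qf
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return -K @ x0  # first input of the computed optimal sequence

# Hypothetical double-integrator example (dt = 0.1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R, Qf = np.eye(2), np.array([[1.0]]), np.eye(2)

x = np.array([1.0, 0.0])
for k in range(50):               # receding-horizon loop
    u = rhc_step(A, B, Q, R, Qf, N=10, x0=x)
    x = A @ x + B @ u             # apply u, measure new state, repeat
```

In a constrained or nonlinear setting the inner solver would be a QP or NLP rather than a Riccati recursion, but the outer "solve, apply first input, re-measure" structure is unchanged.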

2. Terminal Cost, Invariance, and Stability

The terminal cost $\phi(x_{k+N})$ and terminal constraint sets are critical for ensuring the closed-loop system’s stability and near-optimality. If the finite-horizon problem employs the exact value function as terminal penalty, RHC recovers the infinite-horizon optimal controller (Kunisch et al., 2018). In practice, one uses quadratic or higher-order Taylor approximations of the value function,

$$V_k(y) = \sum_{j=2}^{k} \frac{1}{j!} D^j V(0)[y, \ldots, y],$$

and the error in control/cost converges exponentially in the terminal horizon length $T$:

$$\|u_{\text{RHC}} - u^*\|_{L^2(0,\infty)} \leq M e^{-\lambda(T-\tau) - \lambda k T} \|y_0\|^k,$$

where $\lambda$ is the decay rate (Kunisch et al., 2018, Breiten et al., 2018). For nonlinear or constrained systems, $\lambda$-contractive (positively invariant) sets and contractivity constraints are employed to guarantee uniform ultimate boundedness or exponential stability (Zheng et al., 7 Oct 2025, Arvelo et al., 2012).
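In the unconstrained linear-quadratic case, a standard concrete choice of quadratic terminal penalty is $\phi(x) = x^\top P x$ with $P$ solving the discrete algebraic Riccati equation, which makes the finite-horizon problem agree with the infinite-horizon LQR. A minimal sketch, assuming `scipy` is available and using illustrative matrices:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical system and stage-cost weights
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

# Terminal penalty phi(x) = x' P x, with P solving the DARE:
#   P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q
P = solve_discrete_are(A, B, Q, R)

# With this terminal cost, the unconstrained finite-horizon problem
# reproduces the infinite-horizon LQR feedback for any horizon N >= 1:
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain
```

This is the discrete-time analogue of using the exact value function as terminal penalty; for nonlinear systems the higher-order Taylor expansions above play the same role.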

3. Data-Driven and Robust Receding Horizon Control

RHC naturally extends to systems with model uncertainty via data-driven or set-membership techniques. In the robust data-driven framework (Zheng et al., 7 Oct 2025), no fixed pair $(A, B)$ is assumed; instead, the controller maintains a polytope $\mathcal C(\mathcal D)$ of all $(A, B)$ pairs consistent with the data and bounded disturbances, updating $\mathcal C$ as data accumulate:

$$\mathcal C(\mathcal D) = \left\{ (A, B) : \|V(A x_k + B u_k - x_{k+1})\|_\infty \leq 1, \ \forall k \right\}$$

At each step, the RHC solves

$$\min_{\lambda, u} \ \lambda$$

subject to robust contractivity for all $(A, B) \in \mathcal C(\mathcal D)$ and disturbances $v$:

$$\psi_{\mathcal X}(A x_k + B u + v) \leq \lambda\, \psi_{\mathcal X}(x_k)$$

This linear-programming approach tractably enforces universal contractivity with formal guarantees, outperforming batch data-driven methods in both contraction and convergence speed (Zheng et al., 7 Oct 2025).

4. Distributed, Event-Driven, and Large-Scale RHC

RHC methodologies scale to multi-agent, networked, or distributed systems via decomposition and local coordination. Deep-structured RHC for teams (Arabneydi et al., 2021) uses gauge transformations to decouple global and local coordination:

$$u_t^i = \theta_t^* x_t^i + \frac{\alpha_i}{\gamma_i} \left(\bar\theta_t^* - \theta_t^*\right) \bar x_t^\alpha + \ldots$$

In the presence of constraints, tractable local and global quadratic programs (QPs) are solved at each step, with proven feasibility, scalability independent of team size, and $O(H^3 d^3)$ computational complexity.

In networked settings, such as stochastic robotic networks for information broadcast (Silva et al., 2022) or persistent monitoring (Welikala et al., 2020, Welikala et al., 2021), event-driven RHC triggers re-optimization at significant events (arrivals, threshold crossings), often yielding closed-form, locally solvable subproblems with minimal communication. Performance is quantified rigorously (e.g., exponential convergence rates, bounds on mean time to broadcast, time-average uncertainty), with learning-based enhancements further reducing computation (Welikala et al., 2020).
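The event-driven idea can be sketched as follows: re-optimization is triggered only when the measured state deviates from the nominal prediction by more than a threshold, or when the current plan is exhausted. The planner here is a simple feedback rollout standing in for the per-event optimizer; the gain, noise level, and threshold are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[3.16, 2.64]])  # hypothetical stabilizing feedback gain

def make_plan(x, N=10):
    """Stand-in for the finite-horizon optimizer: roll out a fixed
    feedback law over the horizon and return the input sequence."""
    us = []
    for _ in range(N):
        u = -K @ x
        us.append(u)
        x = A @ x + B @ u
    return us

# Event-driven loop: replan only on prediction-error events or when
# the previously computed plan runs out; otherwise keep executing it.
x = np.array([1.0, 0.0])
plan, x_pred, replans, tol = [], x, 0, 0.05
for k in range(50):
    if not plan or np.linalg.norm(x - x_pred, ord=np.inf) > tol:
        plan = make_plan(x)      # event: solve a fresh horizon
        replans += 1
    u = plan.pop(0)
    x_pred = A @ x + B @ u                           # nominal prediction
    x = A @ x + B @ u + rng.uniform(-0.02, 0.02, 2)  # disturbed true step
```

Because the disturbance here stays below the threshold, replanning fires only when a plan is exhausted; larger disturbances or tighter thresholds would trigger it more often, trading computation for tracking accuracy.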

5. RHC under Constraints, Uncertainty, and in Stochastic Regimes

A major strength of RHC lies in its ability to handle constraints—whether interval-wise energy constraints (Arvelo et al., 2012), probabilistic state constraints (Chitraganti et al., 2014, Hokayem et al., 2010, Shah et al., 2012), or temporal logic (LTL) specifications (Svoreňová et al., 2013, Ding et al., 2012, Cai et al., 2020). Probabilistic state constraints are enforced by converting chance constraints to deterministic forms via inverse cdfs, leading to tractable convex programs at each step (Chitraganti et al., 2014). For nonlinear or stochastic systems, RHC decomposes the task into a path-planning layer (using the drift) and a local stochastic optimal controller (solved via linear PDEs or Feynman–Kac representations), providing convergence with high probability under suitable regularity (Shah et al., 2012).
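The chance-constraint conversion can be illustrated for a scalar linear constraint on a Gaussian state: $\Pr(a^\top x \leq b) \geq 1 - \varepsilon$ with $x \sim \mathcal N(\bar x, \Sigma)$ is equivalent to the deterministic, tightened constraint $a^\top \bar x \leq b - \Phi^{-1}(1-\varepsilon)\sqrt{a^\top \Sigma a}$. A minimal sketch with illustrative numbers, assuming `scipy` is available:

```python
import numpy as np
from scipy.stats import norm

def tighten(a, b, Sigma, eps):
    """Deterministic surrogate for P(a'x <= b) >= 1 - eps when
    x ~ N(x_bar, Sigma): returns the tightened bound b_tight so that
    a' x_bar <= b_tight enforces the chance constraint."""
    backoff = norm.ppf(1 - eps) * np.sqrt(a @ Sigma @ a)
    return b - backoff

a = np.array([1.0, 0.0])
Sigma = 0.01 * np.eye(2)
b_tight = tighten(a, b=1.0, Sigma=Sigma, eps=0.05)
# b_tight = 1.0 - 1.645 * 0.1, roughly 0.8355
```

Applied to every chance constraint over the horizon, this converts the stochastic program into a deterministic convex program solvable at each RHC step.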

For systems with only input-output access, or in model-free settings, receding-horizon learning schemes combine proximity-constrained estimation with receding-horizon optimization, achieving global asymptotic convergence without explicit model knowledge (Ebenbauer et al., 2020, Allibhoy et al., 2020).

6. Applications and Generalizations

RHC is applied across a vast spectrum of domains, including process control, robotics and automation, stochastic robotic networks for information broadcast, persistent monitoring, and systems subject to temporal logic specifications.

These applications share the recurring structure of per-step constrained optimization, event- or time-driven replanning, and data-driven or distributed task decomposition—rendering RHC a unifying methodology for tractable, high-performance control.

7. Theoretical Guarantees and Computational Aspects

Rigorous theoretical foundations for RHC include exponential convergence of the receding-horizon control to the infinite-horizon optimum under suitable terminal penalties, recursive feasibility and closed-loop stability via terminal costs, invariant sets, and contractivity constraints, and high-probability convergence guarantees in stochastic regimes.

The ongoing research front continues to expand RHC's scalability, robustness to model uncertainty, learning-awareness, and tight theoretical-performance guarantees—making it a foundational paradigm for modern feedback control design.
