Receding Horizon Control Fundamentals
- Receding Horizon Control is a dynamic optimization framework that solves finite-horizon control problems iteratively using current state feedback.
- It ensures stability and feasibility by employing terminal cost design, invariance constraints, and robust performance measures under uncertainty.
- Its adaptability to distributed, stochastic, and data-driven settings makes it a powerful tool for real-time, constraint-bound applications in robotics and automation.
Receding Horizon Control (RHC), also known as Model Predictive Control (MPC), is an advanced control paradigm in which, at each sampling instant, a finite-horizon optimal control problem is solved using the current state as the initial condition; the computed control is applied for a short interval, and the process is repeated with the system's updated state. Originating in process control, RHC has become the dominant strategy for controlling complex dynamic systems, whether deterministic or stochastic, linear or nonlinear, centralized or distributed, under constraints and real-time feedback. Its essential feature is dynamic, iterative optimization and deployment of control actions over a sliding prediction horizon, accommodating disturbances, uncertainty, constraints, and non-stationarity.
1. Core Principles and Formal Structure
The generic RHC loop operates as follows: at time $t$, given the measured state $x(t)$, a finite-horizon optimal control problem is formulated over a horizon of $N$ steps (or a continuous-time window of length $T$), minimizing a cost functional (often quadratic in state and input) subject to the system dynamics and constraints (state, input, etc.). Only the first control input from the computed sequence is applied; at the next step, the problem is re-solved with updated measurements.
Formally, for a discrete-time LTI system
$$x_{k+1} = A x_k + B u_k,$$
with constraints $x_k \in \mathcal{X}$, $u_k \in \mathcal{U}$, the RHC problem at each time $t$ is (see (Kunisch et al., 2018, Breiten et al., 2018))
$$\min_{u_0,\dots,u_{N-1}} \ \sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N),$$
subject to the dynamics, the initial state $x_0 = x(t)$, and the constraints, where $\ell$ is the stage cost and $V_f$ the terminal cost. The moving, "receding" finite prediction horizon replaces an often intractable infinite-horizon problem by a sequence of tractable subproblems, relying on suitable choices of terminal penalties/constraints to ensure optimality, stability, and feasibility.
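The loop above can be sketched in code. The following is a minimal sketch for the unconstrained LTI/quadratic case: the finite-horizon subproblem is solved exactly by a backward Riccati recursion, only the first input of the computed sequence is applied, and the problem is re-solved at the next state. The double-integrator matrices and weights are hypothetical example values, not from the cited works.

```python
import numpy as np

def finite_horizon_gains(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for the N-step LQR subproblem.
    Returns feedback gains K_0..K_{N-1}; RHC uses only K_0."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]  # K_0 first

def rhc_step(A, B, Q, R, Qf, N, x):
    """One receding-horizon step: solve the N-step problem from the
    current state x and return only the first control input."""
    K0 = finite_horizon_gains(A, B, Q, R, Qf, N)[0]
    return -K0 @ x

# Hypothetical example: sampled double integrator, dt = 0.1
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2); R = np.array([[0.1]]); Qf = 10 * np.eye(2)

x = np.array([1.0, 0.0])
for _ in range(100):
    u = rhc_step(A, B, Q, R, Qf, N=20, x=x)  # re-solve at every step
    x = A @ x + B @ u                        # apply only the first input
print(np.linalg.norm(x))                     # state driven toward the origin
```

For an LTI plant the gains are actually time-invariant, so re-solving each step is redundant here; the loop is written this way to mirror the RHC structure that carries over to time-varying, constrained, or nonlinear settings.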
2. Terminal Cost, Invariance, and Stability
The terminal cost and terminal constraint sets are critical for ensuring the closed-loop system's stability and near-optimality. If the finite-horizon problem employs the exact infinite-horizon value function $V_\infty$ as terminal penalty, RHC recovers the infinite-horizon optimal controller (Kunisch et al., 2018). Practically, one uses quadratic or higher-order Taylor approximations of the value function, and the error in control/cost converges exponentially in the terminal horizon length $T$,
$$\|u_{\mathrm{RHC}} - u_\infty^*\| \le C\, e^{-\delta T},$$
where $\delta > 0$ is the decay rate (Kunisch et al., 2018, Breiten et al., 2018). For nonlinear or constrained systems, $\lambda$-contractive (positively invariant) sets and contractivity constraints are employed to guarantee uniform ultimate boundedness or exponential stability (Zheng et al., 7 Oct 2025, Arvelo et al., 2012).
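The role of the terminal penalty can be checked numerically in the unconstrained LTI case. The sketch below (with hypothetical system matrices) computes the infinite-horizon value function $x^\top P x$ from the discrete algebraic Riccati equation; using $P$ as the terminal cost, the first gain of the finite-horizon problem matches the infinite-horizon LQR gain even for a one-step horizon, while with a crude terminal cost the gap shrinks as the horizon grows.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical example system (sampled double integrator)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2); R = np.array([[0.1]])

# Infinite-horizon value function V(x) = x' P x and optimal gain K_inf
P = solve_discrete_are(A, B, Q, R)
K_inf = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def first_gain(Qf, N):
    """Backward Riccati recursion; returns the gain applied at step 0."""
    Pk = Qf
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ Pk @ B, B.T @ Pk @ A)
        Pk = Q + A.T @ Pk @ A - A.T @ Pk @ B @ K
    return K

# With Qf = P the RHC law is infinite-horizon optimal even for N = 1;
# with a crude Qf the gap to K_inf decays as the horizon N grows.
exact = np.linalg.norm(first_gain(P, 1) - K_inf)
crude = [np.linalg.norm(first_gain(np.eye(2), N) - K_inf) for N in (5, 20, 80)]
print(exact, crude)
```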
3. Data-Driven and Robust Receding Horizon Control
RHC naturally extends to systems with model uncertainty via data-driven or set-membership techniques. In the robust data-driven framework (Zheng et al., 7 Oct 2025), no fixed $(A, B)$ is assumed; instead, the controller maintains a polytope $\Sigma_t$ of all pairs $(A, B)$ consistent with the recorded data and the bounded disturbances, updating $\Sigma_t$ as data accumulate. At each step, the RHC solves a linear program subject to robust contractivity for all $(A, B) \in \Sigma_t$ and admissible disturbances $w$. This linear programming approach tractably enforces universal contractivity with formal guarantees, outperforming batch data-driven methods in both contraction and convergence speed (Zheng et al., 7 Oct 2025).
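The membership set itself is simple to construct: each recorded transition constrains $(A, B)$ by two half-spaces per state dimension. The sketch below (hypothetical system and disturbance bound, not the cited paper's algorithm) generates data under a bounded disturbance and tests whether a candidate model lies in the data-consistent polytope.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T, w_bar = 2, 1, 30, 0.05   # dims, samples, inf-norm disturbance bound

# Hypothetical true system, used only to generate the data
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])
B_true = np.array([[0.0], [0.1]])

X = np.zeros((n, T + 1))
U = rng.uniform(-1, 1, (m, T))
for k in range(T):
    w = rng.uniform(-w_bar, w_bar, n)            # bounded disturbance
    X[:, k + 1] = A_true @ X[:, k] + B_true @ U[:, k] + w

def consistent(A, B):
    """(A, B) is data-consistent iff every residual
    x_{k+1} - A x_k - B u_k lies in the box [-w_bar, w_bar]^n;
    these inequalities are linear in (A, B), so the set is a polytope."""
    R = X[:, 1:] - A @ X[:, :-1] - B @ U
    return bool(np.all(np.abs(R) <= w_bar + 1e-12))

print(consistent(A_true, B_true))                # True: truth is in the set
print(consistent(A_true, B_true + 1.0))          # False: bad model excluded
```

As more transitions arrive, further half-spaces are added and the polytope shrinks, which is the updating step referenced above; the robust RHC then enforces contractivity for every model remaining in the set.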
4. Distributed, Event-Driven, and Large-Scale RHC
RHC methodologies scale to multi-agent, networked, or distributed systems via decomposition and local coordination. Deep-structured RHC for teams (Arabneydi et al., 2021) uses gauge transformations to decouple global and local coordination. In the presence of constraints, tractable local and global quadratic programs (QPs) are solved at each step, with proven feasibility and per-step computational complexity independent of team size.
In networked settings, such as stochastic robotic networks for information broadcast (Silva et al., 2022) or persistent monitoring (Welikala et al., 2020, Welikala et al., 2021, Welikala et al., 2020), event-driven RHC triggers optimization at significant events (arrivals, threshold crossings), often leading to closed-form, locally solvable subproblems, with minimal communication. The performance is quantified rigorously (e.g., exponential convergence rates, mean time to broadcast bounds, time-average uncertainties), with learning-based enhancements to further reduce computation (Welikala et al., 2020).
5. RHC under Constraints, Uncertainty, and in Stochastic Regimes
A major strength of RHC lies in its ability to handle constraints—whether interval-wise energy constraints (Arvelo et al., 2012), probabilistic state constraints (Chitraganti et al., 2014, Hokayem et al., 2010, Shah et al., 2012), or temporal logic (LTL) specifications (Svoreňová et al., 2013, Ding et al., 2012, Cai et al., 2020). Probabilistic state constraints are enforced by converting chance constraints to deterministic forms via inverse cdfs, leading to tractable convex programs at each step (Chitraganti et al., 2014). For nonlinear or stochastic systems, RHC decomposes the task into a path-planning layer (using the drift) and a local stochastic optimal controller (solved via linear PDEs or Feynman–Kac representations), providing convergence with high probability under suitable regularity (Shah et al., 2012).
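The chance-constraint conversion mentioned above is concrete in the Gaussian case: a constraint $\Pr(a^\top x \le b) \ge 1-\varepsilon$ with $x \sim \mathcal{N}(\mu, \Sigma)$ is equivalent to the deterministic constraint $a^\top \mu + \Phi^{-1}(1-\varepsilon)\sqrt{a^\top \Sigma a} \le b$. The sketch below (hypothetical numbers) applies this tightening and checks the resulting violation frequency by Monte Carlo.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Chance constraint P(a @ x <= b) >= 1 - eps with x ~ N(mu, Sigma)
a = np.array([1.0, 1.0])
mu = np.array([0.2, 0.1])
Sigma = np.array([[0.04, 0.0], [0.0, 0.09]])
eps = 0.05

# Deterministic tightening via the inverse Gaussian cdf (norm.ppf)
margin = norm.ppf(1 - eps) * np.sqrt(a @ Sigma @ a)
b = a @ mu + margin          # tightest b for which the constraint holds

# Monte Carlo check: empirical violation frequency should be about eps
x = rng.multivariate_normal(mu, Sigma, size=200_000)
viol = np.mean(x @ a > b)
print(viol)
```

Because the tightened form is linear in the mean (and convex in general), per-step RHC problems with such constraints remain tractable convex programs, as noted in (Chitraganti et al., 2014).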
For systems with only input/output access or model-free settings, receding horizon learning schemes combine proximity-constrained estimation with receding horizon optimization, achieving global asymptotic convergence without explicit model knowledge (Ebenbauer et al., 2020, Allibhoy et al., 2020).
6. Applications and Generalizations
RHC is applied across a vast spectrum:
- Swarm robotics and deep-structured teams (Arabneydi et al., 2021)
- Distributed estimation and monitoring (Welikala et al., 2020, Welikala et al., 2021, Welikala et al., 2020)
- Stochastic network control (Silva et al., 2022)
- Automata-theoretic and temporal logic-constrained planning (Svoreňová et al., 2013, Ding et al., 2012, Cai et al., 2020)
- Energy-constrained systems (Arvelo et al., 2012)
- Autonomous driving in dense traffic with proactive interaction using spatiotemporal safety barriers and multiple-shooting NLP solvers (Zheng et al., 2023)
- Infinite-horizon aggregative games via periodic strategy mapping and set-valued receding-horizon dynamical systems, with Lyapunov-based convergence results (Fele et al., 2022)
These applications share the recurring structure of per-step constrained optimization, event- or time-driven replanning, and data-driven or distributed task decomposition—rendering RHC a unifying methodology for tractable, high-performance control.
7. Theoretical Guarantees and Computational Aspects
Rigorous theoretical foundations for RHC include:
- Exponential convergence rates to true infinite-horizon optimal control as horizon increases (Kunisch et al., 2018, Breiten et al., 2018)
- Uniform ultimate boundedness and stability with bounded disturbances and contractivity enforcement (Zheng et al., 7 Oct 2025, Arvelo et al., 2012)
- Explicit Lyapunov and ISS arguments for suboptimal, distributed, or event-driven variants (Allibhoy et al., 2020, Hokayem et al., 2010, Welikala et al., 2020)
- Recursive feasibility and correctness for rich temporal logic tasks via automata-based terminal and energy constraints (Svoreňová et al., 2013, Ding et al., 2012, Cai et al., 2020)
- Complexity that is polynomial in horizon/state dimension and, under suitable decompositions, independent of network or agent population sizes (Arabneydi et al., 2021, Zheng et al., 2023)
- Real-time feasibility in automotive and robotic settings by exploiting structure (multiple-shooting, warm start, convex reformulations) (Zheng et al., 2023)
The ongoing research front continues to expand RHC's scalability, robustness to model uncertainty, learning-awareness, and tight theoretical performance guarantees, making it a foundational paradigm for modern feedback control design.