
Receding Update Horizon in Optimal Control

Updated 5 March 2026
  • Receding Update Horizon is a method that iteratively solves truncated optimality or feasibility problems over a sliding, finite window to generate responsive and tractable control actions.
  • It employs optimization techniques with either fixed or adaptive horizons to integrate state estimation, constraint management, and uncertainty awareness in real time.
  • This approach underpins a range of applications including model predictive control, fault estimation, and distributed coordination, offering stability and performance guarantees.

A receding update horizon refers to the iterative strategy of re-optimizing plans, estimates, or control sequences over a finite window that slides forward in time, iteration, or task sequence, and of applying only the earliest elements of each solution. At each update, the finite “look-ahead” or “planning” horizon defines the window (temporal or an analogous construct) over which an optimality or feasibility problem is solved; the horizon is then receded (advanced) before the next execution or estimate. This paradigm underpins a broad class of receding horizon techniques, variously called “model predictive control” (MPC), “moving horizon estimation” (MHE), or “receding horizon planning,” with analogues in game theory, fault estimation, distributed optimization, iterative learning, data-driven control, and beyond.

1. Core Principles of the Receding Update Horizon

The receding update horizon framework is characterized by repeatedly solving a truncated optimality or feasibility problem over a finite future window, using newly measured or estimated states, and applying or implementing only the immediate part of the computed solution. The horizon is then slid forward—typically by one step (control, time, iteration, or other appropriate index)—and the problem is re-solved at the next decision point. This recursive structure is fundamental to real-time MPC, moving horizon estimation, and numerous planning and learning variants (Bhattacharya et al., 2014, Martin et al., 2023, Weng et al., 23 Jun 2025, Fele et al., 2022).

Key schematic steps are:

  1. Measure or estimate the current system state or information vector.
  2. Solve a finite-horizon optimization, estimation, or game-theoretic equilibrium problem using this state as the initial condition.
  3. Apply or implement only the earliest part (e.g., the first action, estimate, or planned trajectory segment).
  4. Advance the horizon (update the time, task, iteration, or block) and repeat.

This structure offers a powerful balance between responsiveness (explicit feedback to actual states or measurements) and tractability (finite computation per cycle), while allowing integration of constraints, uncertainty models, and learning updates at each step.
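
The following minimal sketch makes this loop concrete. It assumes three user-supplied callables, measure_state, solve_finite_horizon, and apply_input, which are hypothetical placeholders standing in for the estimator, the finite-horizon solver, and the actuation interface of a particular application:

```python
# Generic receding-update-horizon loop (illustrative sketch).
# measure_state, solve_finite_horizon, and apply_input are hypothetical
# placeholders, not a fixed API from any cited work.
def receding_horizon_loop(measure_state, solve_finite_horizon, apply_input,
                          horizon=20, num_steps=100):
    for k in range(num_steps):
        x = measure_state()                      # 1. measure/estimate the state
        plan = solve_finite_horizon(x, horizon)  # 2. solve the N-step problem from x
        apply_input(plan[0])                     # 3. implement only the first element
        # 4. the horizon recedes: at step k+1 we re-measure and re-solve
```

Every receding-horizon variant discussed below, from MPC to moving horizon estimation to receding window task planning, instantiates these four steps with its own notion of state and its own solver.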

2. Mathematical Formalism and Variants

The receding update horizon admits multiple mathematical formulations, varying by domain, dynamics, and optimization type:

  • Classic finite-horizon MPC/MHE: At time $k$, solve

$$
\min_{u(\cdot)} \; \sum_{i=0}^{N-1} \ell(x_i, u_i) + V_f(x_N)
$$

subject to state and control constraints, with dynamic constraints $x_{i+1} = f(x_i, u_i)$ and the measured state as initial condition $x_0$. Only $u_0^*$ is applied; at $k+1$ the horizon is shifted and the problem is solved anew (Bhattacharya et al., 2014, Weng et al., 23 Jun 2025). A minimal numerical sketch follows this list.

  • Stochastic/Moment-based MPC: For systems with probabilistic parameters, the expectation or higher moment of the cost and constraints are optimized at each receding step. Conversion to deterministic lifted systems (e.g., via polynomial chaos expansions) enables tractable on-line receding-horizon updates (Bhattacharya et al., 2014).
  • Game-theoretic receding horizon: Strategies for all players are optimized over $T$-step windows, under periodic constraints or aggregative interactions; only the first block is implemented, and the block plan is shifted forward (Fele et al., 2022, Benenati et al., 2024).
  • Learning-based or adaptive horizon updates: In some approaches, the prediction horizon itself is varied adaptively (e.g., as a function of time or sample count), or the terminal penalty weight is increased over time to enforce desired convergence properties (Shi et al., 2023, Ebenbauer et al., 2020).
  • Task and motion planning (receding window TAMP): A finite sequence of symbolic and geometric decisions is planned in each horizon; the first action is executed, the window is receded, and re-planning occurs (Castaman et al., 2020).
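
As a concrete (and deliberately simple) instance of the classic finite-horizon formulation above, the following sketch solves an unconstrained LQ problem by backward Riccati recursion and returns only the first input $u_0^*$; the matrices A, B, Q, R, Qf are illustrative choices, not taken from any cited paper:

```python
import numpy as np

def lq_mpc_step(A, B, Q, R, Qf, x0, N):
    """One receding-horizon update for the unconstrained LQ problem
    min sum_{i=0}^{N-1} (x_i'Q x_i + u_i'R u_i) + x_N'Qf x_N
    s.t. x_{i+1} = A x_i + B u_i,  x_0 = x0. Returns u_0* only."""
    P = Qf
    for _ in range(N):  # backward Riccati recursion over the window
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return -K @ x0      # after N backward steps, K is the first-stage gain K_0

# Closed loop: apply u_0*, let the plant advance, shift the horizon, re-solve.
A = np.array([[1.0, 0.1], [0.0, 1.0]])  # illustrative double integrator
B = np.array([[0.0], [0.1]])
Q, R, Qf = np.eye(2), np.eye(1), 10.0 * np.eye(2)
x = np.array([1.0, 0.0])
for k in range(50):
    u = lq_mpc_step(A, B, Q, R, Qf, x, N=15)
    x = A @ x + B @ u
```

With state or input constraints present, the Riccati step is replaced by a constrained QP solve at each update, but the shift-and-resolve pattern is identical.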

3. Update Mechanisms and Horizon Management

The receding update horizon can involve various management strategies:

  • Fixed Horizon and Update Interval: Most commonly, a fixed window length $N$ is chosen and advanced by a fixed interval (typically one step); stability and performance guarantees depend on adequacy of $N$ and terminal costs (Bhattacharya et al., 2014, Azmi et al., 2024, Sun et al., 2023).
  • Adaptive Horizon/Solver Parameters: Adaptive schemes increase the prediction horizon as a logarithmic or resource-driven function (e.g., in learning-based control or planning), or adapt the terminal weight or terminal set to regulate closed-loop performance or regret (Shi et al., 2023, Ebenbauer et al., 2020, Lukina et al., 2016); a toy schedule is sketched after this list.
  • Block-wise or Structured Updates: Block-level (multi-step or multi-iteration) updates, periodic horizon shifts, or structured constraints may be used, e.g., best-response updates over $T$-slot windows in aggregative games (Fele et al., 2022), block parameterizations in estimation (Wallace et al., 2018), or event-triggered update intervals (Martin et al., 2023).
  • Multi-level or Multi-fidelity Execution: In some advanced planning or multi-agent settings, multiple nested or staggered horizons—such as a short execution horizon plus a longer, convex or approximate prediction horizon ("tail cost")—are used to balance computational tractability with performance (Wang et al., 2023).
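
As a toy illustration of the adaptive-horizon idea (the logarithmic schedule and its constants below are illustrative assumptions, not a prescription from the cited works):

```python
import math

def adaptive_horizon(k, N_min=5, N_max=60, c=8.0):
    """Grow the prediction horizon roughly logarithmically with the
    time/sample index k, clipped to [N_min, N_max]. N_min, N_max, and c
    are illustrative tuning constants."""
    return min(N_max, max(N_min, int(c * math.log(k + 2))))

# With the defaults above, the horizon grows from 5 at k = 0
# toward 55 at k = 1000, then saturates at N_max.
```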

4. Performance, Stability, and Suboptimality Guarantees

Receding update horizon algorithms are typically analyzed for stability, closed-loop performance, suboptimality, or regret relative to infinite-horizon or full-information solutions:

  • Lyapunov and ISS Arguments: Standard proofs use a Lyapunov function or contraction property, showing that the cost, system energy, or estimation error decreases at each receding step; the canonical decrease condition is displayed after this list. This typically requires that the horizon $N$ and the terminal penalty/cost be chosen in accordance with the system's settling time and stabilization properties (Bhattacharya et al., 2014, Azmi et al., 2024, Kunisch et al., 2018, Weng et al., 23 Jun 2025).
  • Regret and Competitive Analysis: For adversarial or online settings, bounded regret relative to a clairvoyant reference policy can be shown, as in receding-horizon regret-MPC. Such guarantees can hold even for infrequent updates, provided the terminal ingredients and horizon are chosen properly (Martin et al., 2023).
  • Suboptimality vs. Model Uncertainty: Analysis of suboptimality in receding horizon LQ control with uncertain models illustrates the trade-off between prediction horizon, terminal penalty accuracy, and modeling error: when the model error is small, longer horizons are optimal, whereas when the terminal penalty is accurate, shorter horizons are preferable. Adaptive horizon schemes can ensure sublinear regret in learning-based control (Shi et al., 2023).
  • Periodic and Aggregative Convergence: In periodic or aggregative receding horizon games, set-valued Lyapunov functions and invariance arguments guarantee convergence to periodic equilibria under very general conditions, even in the presence of changing agent populations or constraints (Fele et al., 2022).
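
The canonical decrease condition behind the Lyapunov arguments above can be stated compactly. Writing $V_N(x)$ for the optimal value of the $N$-step problem started at $x$, suitably chosen terminal ingredients guarantee, along the closed loop,

$$
V_N(x_{k+1}) \;\le\; V_N(x_k) - \ell\big(x_k, u_0^*(x_k)\big),
$$

so the optimal cost decreases by at least the incurred stage cost at every receding step; with $\ell$ positive definite, $V_N$ thus serves as a Lyapunov function for the closed loop.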

5. Implementation Strategies and Algorithmic Structures

Algorithmic design in receding update horizon frameworks is dominated by the need to efficiently solve finite-horizon optimization or estimation subproblems, and to do so in real time, often under decentralized or distributed architectures:

  • Lifted or Augmented System Representations: Lifting uncertain, hybrid, or iteration-to-iteration system models allows the receding horizon problem to be cast as a higher-dimensional, deterministic optimization (e.g., via generalized polynomial chaos (Bhattacharya et al., 2014), block-aggregation (Wu et al., 2021), or multi-level lifting (Sun et al., 2023)); a linear-system instance of this stacking is sketched after this list.
  • Online Model Learning and Adaptive Estimation: In data-driven and learning-based settings, parameter estimation and prediction are updated at each receding step via sliding window regression, proximity searches, or recursive system identification (Ebenbauer et al., 2020, Allibhoy et al., 2020).
  • Structured Constraint Handling: Probabilistic constraints, state-dependent switching, or combinatorial structure are encoded by reformulating constraints in expectation, as variance bounds, via deterministic surrogates (e.g., chance constraints as quantile thresholds (Chitraganti et al., 2014)) or blockwise plans (Lukina et al., 2016).
  • Distributed and Multi-agent Coordination: Decomposition into agent-wise, block-wise, or scenario-wise updates enables distributed optimization, iterative best-response over receding blocks, and scalable horizon updates (Fele et al., 2022, Allibhoy et al., 2020, Wang et al., 2023).
  • Event-driven and Adaptive Update Intervals: The update interval may be dynamic, as in event-triggered receding horizon architectures where updates are scheduled based on error thresholds, resource constraints, or application-specific criteria (Martin et al., 2023, Wallace et al., 2018).
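
As a small illustration of the lifting idea in the first bullet, shown here for a deterministic linear system (the polynomial chaos and block-aggregation liftings in the cited works apply the same stacking pattern to richer state vectors):

```python
import numpy as np

def prediction_matrices(A, B, N):
    """Stack the predictions x_1, ..., x_N of x_{i+1} = A x_i + B u_i as
    X = Phi @ x0 + Gamma @ U with U = [u_0; ...; u_{N-1}], so that the
    whole N-step problem becomes one deterministic program in U."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):           # block row i predicts x_{i+1}
        for j in range(i + 1):   # u_j enters x_{i+1} through A^{i-j} B
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = \
                np.linalg.matrix_power(A, i - j) @ B
    return Phi, Gamma
```

Quadratic costs and linear constraints on the stacked vectors then condense into a single QP per receding update.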

6. Application Domains and Extensions

The receding update horizon construct underpins technical advances across a wide range of domains:

  • Robust Control and Planning: Real-time MPC for uncertain systems, stochastic MPC using polynomial chaos lifting, nonconvex receding-horizon trajectory planning in robotics and vehicles (Bhattacharya et al., 2014, Bergman et al., 2019).
  • Learning and Data-driven Estimation: Adaptive horizon scaling in learning-based LQ control, horizon-adaptive iterative learning control, distributed data-driven MPC (Shi et al., 2023, Wu et al., 2021, Allibhoy et al., 2020).
  • Game Theory and Distributed Optimization: Infinite-horizon aggregative games, receding-horizon strategy evolution with periodic or block constraints, Nash equilibrium computation by receding-horizon variational inequalities (Fele et al., 2022, Benenati et al., 2024).
  • Estimation and Fault Diagnosis: Moving horizon estimation for nonlinear systems, sliding window receding update in GNSS localization, data-driven robust receding-horizon fault estimation (Weng et al., 23 Jun 2025, Wan et al., 2015).
  • Task and Motion Planning under Uncertainty: Online TAMP under scene dynamics and partial observability, receding window planning with real-time responsiveness (Castaman et al., 2020).
  • Adaptive Optimization and Intelligent Autonomy: Adaptive receding horizon sizes in high-dimensional stochastic planning, switching between multiple fidelity models or learned local value functions to guide the update horizon (Wang et al., 2023, Lukina et al., 2016).

In summary, the receding update horizon is a foundational construct in contemporary optimization-based control, estimation, planning, and multi-agent systems, offering a unified mechanism for online, adaptive, and stabilizing feedback under finite resources and uncertainty. Rigorous analysis—rooted in Lyapunov or regret arguments, contraction theory, and operator-theoretic techniques—guarantees performance and stability across many problem classes (Bhattacharya et al., 2014, Martin et al., 2023, Fele et al., 2022, Shi et al., 2023, Weng et al., 23 Jun 2025).
