Receding Horizon Approach
- The receding horizon approach is an optimization-based methodology that repeatedly solves finite-horizon problems to guide real-time decision-making in dynamic settings.
- It integrates updated state feedback and adaptive replanning to mitigate disturbances and model mismatches, enhancing overall system robustness.
- This method is widely used in MPC, motion planning, and hybrid control, offering theoretical guarantees and practical applications in complex systems.
A receding horizon approach is an optimization-based methodology in which planning, decision, or control is performed by solving a finite-horizon subproblem at each time step, applying only the first input (or action), and then shifting (or "receding") the planning window forward to repeat the process with new state measurements or information. This paradigm is central to real-time optimal control (commonly referred to as Model Predictive Control, or MPC), motion planning, multi-agent coordination, and decision-making in time-varying, uncertain, or dynamic environments. At each iteration, the approach enables real-time adaptive response to disturbances and model mismatch by explicitly incorporating state feedback and updated information, leading to improved robustness and tractability relative to open-loop or full-horizon methods.
1. Mathematical Principles and Unified Formulation
The core of the receding horizon approach is the repeated solution of a parameterized finite-horizon optimization problem, typically of the form:

$$\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N)$$

subject to

$$x_{k+1} = f(x_k, u_k), \qquad x_0 = x(t),$$

and constraints on $(x_k, u_k)$, possibly including state/input sets, environmental constraints, or chance constraints. Only $u_0$ is executed, then the procedure is repeated from the new state $x(t+1)$. The horizon length $N$ is typically fixed, but may be variable to accommodate task- or resource-specific constraints (Arvelo et al., 2012).
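The solve/apply-first/shift pattern can be sketched on a toy scalar system; the dynamics, cost, and discretized input grid below are illustrative, not drawn from the cited works:

```python
import itertools

# Receding-horizon loop for a scalar system x+ = x + u: brute-force the
# finite-horizon problem, apply only the first input, shift the window, repeat.

def solve_horizon(x0, N, grid):
    # Enumerate all input sequences of length N and pick the cheapest.
    best_cost, best_seq = float("inf"), None
    for seq in itertools.product(grid, repeat=N):
        x, cost = x0, 0.0
        for u in seq:
            cost += x * x + 0.1 * u * u   # stage cost l(x, u)
            x = x + u                      # predicted dynamics
        cost += x * x                      # terminal cost V_f(x_N)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
x = 3.0
for t in range(12):
    u = solve_horizon(x, N=3, grid=grid)[0]  # execute only the first input
    x = x + u                                 # system moves; horizon recedes
print(round(x, 2))  # the state is regulated to the origin
```

Brute-force enumeration is tractable only for tiny grids and horizons; in practice the inner problem is solved with QP/MIP or gradient-based methods, but the outer loop is exactly this.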
A distinguishing feature is the explicit separation between the "execution horizon"—over which the next input is applied—and the "prediction horizon"—over which the future system behavior is optimized or reasoned about.
In task and motion planning, or hybrid systems, the receding horizon window may be over sequences of actions or discrete decisions rather than just control inputs (Castaman et al., 2020, Tavassoli, 2018).
2. Algorithmic Implementations and Variations
Several canonical instantiations and algorithmic patterns of the receding horizon approach have been developed:
- Standard Model Predictive Control (MPC): At each time step, solve an optimal control problem over a fixed finite horizon with state and input constraints, apply the first input, shift the horizon, and repeat. Rigorous stability and recursive feasibility can be enforced via terminal cost and constraint design (Chen et al., 2023, Breiten et al., 2018).
- Event-Driven RHC: Optimization is triggered by events such as agent arrivals/departures, target uncertainty hitting thresholds, or environmental state changes, rather than at fixed sampling intervals. The planning horizon can itself be optimized as part of the decision variables (Welikala et al., 2020).
- Receding Horizon for Hybrid Systems: For systems with discrete and continuous dynamics, the receding horizon approach solves a sequence of finite-horizon hybrid optimal control problems, often combining branch-and-bound over discrete sequences with continuous optimality conditions (hybrid maximum principle) (Tavassoli, 2018).
- Task and Motion Planning: Windowed symbolic task planning is integrated with motion planning over a finite receding horizon, providing real-time adaptability in dynamic or stochastic environments (Castaman et al., 2020).
- Scheduling and Assignment: Multi-agent task scheduling, incorporating task-agent matching and resource constraints, is optimized over a rolling finite horizon; only the immediate assignments are executed before replanning (Emam et al., 2020).
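The event-driven pattern can be sketched minimally; the scalar system, nominal plan, and deviation-threshold trigger rule below are illustrative assumptions, not taken from Welikala et al.:

```python
import numpy as np

rng = np.random.default_rng(0)
a, N, tol = 0.9, 10, 0.05   # dynamics coefficient, horizon, trigger tolerance

def plan(x0):
    # Open-loop nominal plan over the horizon: u_k = -a * x_k cancels the
    # nominal dynamics, so the predicted state reaches zero after one step.
    xs, us = [x0], []
    for _ in range(N):
        u = -a * xs[-1]
        us.append(u)
        xs.append(a * xs[-1] + u)
    return us, xs

x, replans, k = 1.0, 0, 0
us, xs_pred = plan(x)
for t in range(50):
    # Event trigger: plan exhausted, or measured state deviates from prediction.
    if k >= N or abs(x - xs_pred[k]) > tol:
        us, xs_pred = plan(x)
        k = 0
        replans += 1
    u = us[k]
    x = a * x + u + 0.01 * rng.standard_normal()  # true system with disturbance
    k += 1

print(replans, abs(x))  # far fewer solves than the 50 a fixed-rate scheme would use
```

The point of the sketch is the trigger condition: optimization effort is spent only when the stored plan becomes stale, rather than at every sampling instant.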
The table below summarizes several representative receding horizon instantiations.
| Domain/Problem | Optimization Variables | Horizon Logic |
|---|---|---|
| Classical MPC (continuous) | Control inputs $u_0,\dots,u_{N-1}$ | Fixed $N$, replan each step |
| Event-driven RHC | Control inputs and horizon length | Event-triggered, horizon-optimized |
| Hybrid optimal control | Discrete mode sequence and continuous controls | Finite horizon, branch-and-bound |
| Task and motion planning | Symbolic actions and motion plans | $N$ actions, replan |
| Multi-agent scheduling | Task-agent assignments | $N$ assignments, rolling window |
3. Stability, Feasibility, and Performance Guarantees
Rigorous properties of receding horizon methods hinge on horizon length, terminal cost/constraint design, and system properties (stabilizability, detectability). Fundamental results include:
- Classical terminal cost conditions: The addition of a control Lyapunov-based terminal cost and constraint can guarantee asymptotic stability and recursive feasibility, provided certain decrease conditions are met (Chen et al., 2023, Breiten et al., 2018).
- Contractive/energy constraints: For interval-wise energy-constrained systems, varying-horizon RHC and contractive terminal sets ensure analytic asymptotic stability under energy limits (Arvelo et al., 2012).
- Exponential Turnpike & Performance Approximation: In (possibly infinite-dimensional) linear-quadratic settings, the RHC-generated control converges exponentially to the infinite-horizon optimal trajectory as the horizon increases, by leveraging the turnpike property and tailored terminal costs (Breiten et al., 2018, Sun et al., 2023).
- Monotonicity, Regret, and Learning: Regret-minimizing RHC delivers bounded competitive performance, with guarantees on robustness and cost relative to clairvoyant or best-in-hindsight controllers (Martin et al., 2023, Muthirayan et al., 2020).
- Data-driven robustification: Polyhedral set-membership updating with a receding horizon structure achieves explicit robust stability and contractivity guarantees without reliance on persistent excitation (Zheng et al., 7 Oct 2025).
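The terminal ingredients referenced above are commonly stated as follows (a standard formulation from the MPC stability literature, not specific to the cited works): a terminal set $X_f$, terminal cost $V_f$, and local controller $\kappa_f$ are chosen so that, for all $x \in X_f$,

```latex
% (i) invariance of the terminal set under the local controller:
f\bigl(x, \kappa_f(x)\bigr) \in X_f,
% (ii) terminal cost decrease, dominating the stage cost:
V_f\bigl(f(x, \kappa_f(x))\bigr) - V_f(x) \le -\ell\bigl(x, \kappa_f(x)\bigr).
```

When (i) and (ii) hold, the finite-horizon value function serves as a Lyapunov function for the receding horizon closed loop, which is the mechanism behind the asymptotic stability and recursive feasibility guarantees above.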
4. Extensions: Multi-Agent, Uncertainty, and Nonlinear/Hybrid Systems
Receding horizon methods are widely adapted for complex scenarios:
- Multi-agent systems: Cooperative agents coordinate over finite planning windows, with RHC providing modularity, robustness to agent/team changes, and tractability for large-scale assignment problems (Emam et al., 2020).
- Uncertainty and Constraints: RHC can accommodate probabilistic state constraints (using chance-constrained programming and deterministic equivalents), adversarial disturbances, or explicit robustness margins. Chance constraints are reformulated as deterministic constraints via quantile functions, enabling convex programs for systems affected by unbounded Gaussian disturbances (Chitraganti et al., 2014, Zhang et al., 2021, Martin et al., 2023, Zheng et al., 7 Oct 2025).
- Hybrid Dynamics: For systems with both discrete and continuous control authority, receding horizon hybrid optimal control uses indirect methods (e.g., hybrid maximum principle) and branch-and-bound logic for optimal blending of discrete and continuous trajectories (Tavassoli, 2018).
- Manipulation and Multi-Contact Planning: Learned or relaxed value functions are integrated with receding horizon planning (RHP) to address non-convex, contact-rich robot manipulation and legged locomotion, enabling real-time response in uncertain, dynamic environments (Bejjani et al., 2018, Wang et al., 2023).
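For Gaussian disturbances, the quantile-based reformulation mentioned above has a one-line deterministic equivalent; a small sketch using Python's standard library (the function name and numbers are illustrative):

```python
from statistics import NormalDist

def deterministic_equivalent(mu, sigma, b, eps):
    # For x ~ N(mu, sigma^2), the chance constraint  P(x <= b) >= 1 - eps
    # is equivalent to the deterministic constraint
    #   mu + Phi^{-1}(1 - eps) * sigma <= b,
    # where Phi^{-1} is the standard normal quantile function.
    z = NormalDist().inv_cdf(1 - eps)
    return mu + z * sigma <= b

# At risk level 5%, a predicted state with mean 1.0 and std 0.5 satisfies an
# upper bound of 2.5 (since 1.0 + 1.645 * 0.5 ~= 1.82), but not a bound of 1.5.
print(deterministic_equivalent(1.0, 0.5, 2.5, 0.05))  # True
print(deterministic_equivalent(1.0, 0.5, 1.5, 0.05))  # False
```

Because the tightened constraint is affine in the mean, chance-constrained RHC with Gaussian noise remains a convex program, which is what makes the formulations in Chitraganti et al. (2014) and related work tractable online.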
5. Practical Applications and System-Level Design
The receding horizon paradigm underpins high-performance autonomous systems and decision-making architectures:
- Robot motion and trajectory planning: Reachability-based trajectory design (RTD) unifies safety, dynamic feasibility, and real-time computation through offline reachable sets and efficient obstacle representations, enabling provably safe high-speed navigation in unknown environments while maintaining persistent feasibility (Kousik et al., 2018).
- Task and resource scheduling: Rolling-horizon optimization enables heterogeneous agent teams to achieve superior makespan and resource utilization compared to greedy/shortsighted baselines, with support for dynamic task arrivals and failures (Emam et al., 2020).
- Inspection and coverage: UAV-based receding-horizon mixed-integer planning fuses vehicle dynamics and sensor/FOV constraints, yielding real-time, collision-free, and coverage-optimal trajectories even in cluttered 3D spaces (Papaioannou et al., 2023).
- Crowd-aware robot navigation: Receding horizon optimization seamlessly integrates online spatial-temporal crowd density estimates for robot path planning, fusing multi-modal predictive memory with receding waypoint optimization and yielding efficient, anomaly-adaptive trajectories (Ge et al., 2023).
- Persistent monitoring: Distributed, event-driven receding horizon control delivers globally optimal event-based decisions for persistent monitoring in networked agent systems, with parameter-free, closed-form optimization at each event (Welikala et al., 2020).
6. Computational and Theoretical Considerations
The effectiveness and efficiency of receding horizon methods depend on several key computational and theoretical choices:
- Solver architecture: Depending on problem structure (continuous, hybrid, integer, stochastic), optimization problems may employ convex QP/SDP solvers, MIP for combinatorial logic, or specialized algebraic solvers (e.g., for the hybrid maximum principle).
- Terminal cost and constraint selection: Design of the terminal ingredients is pivotal for stability, convergence, and recursive feasibility, with complementary innovations allowing for stability without conservative terminal penalties (Chen et al., 2023, Breiten et al., 2018).
- Online update and pruning strategies: In scheduling and hybrid settings, branch-and-bound and lower-bound cost heuristics enable real-time feasibility for complex, large-scale assignment and decision problems.
- Data-driven and learning extensions: Recent methods replace analytic models with data-driven system set-membership identification, enabling robust receding horizon control under bounded-data uncertainty, or integrate RL for heuristic value function improvement (Zheng et al., 7 Oct 2025, Bejjani et al., 2018).
- Horizon length and trade-offs: Increasing the horizon improves closeness to infinite-horizon or clairvoyant optimality but increases computation. Empirical studies reveal critical diminishing returns beyond moderate horizon lengths (Sun et al., 2023, Bergman et al., 2019).
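The horizon-length trade-off can be made concrete on a toy linear-quadratic example (system matrices and horizon values are illustrative): longer horizons reduce closed-loop cost, with rapidly diminishing returns.

```python
import numpy as np

def rhc_closed_loop_cost(A, B, Q, R, N, x0, steps=200):
    # Backward Riccati recursion over horizon N; the last gain computed is
    # the first-step gain of the finite-horizon problem.
    P = Q.copy()
    K = np.zeros((B.shape[1], A.shape[0]))
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    # For LTI dynamics, replanning at each step reproduces the same gain,
    # so the receding horizon loop reduces to static feedback u = -K x.
    x, J = x0.copy(), 0.0
    for _ in range(steps):
        u = -K @ x
        J += (x.T @ Q @ x + u.T @ R @ u).item()
        x = A @ x + B @ u
    return J

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # discretized double integrator
B = np.array([[0.0], [dt]])
Q, R = np.eye(2), np.array([[0.1]])
x0 = np.array([[5.0], [0.0]])

costs = {N: rhc_closed_loop_cost(A, B, Q, R, N, x0) for N in (2, 5, 10, 20, 40)}
print(costs)  # cost drops sharply at first, then flattens as N grows
```

On this example the gap between N=20 and N=40 is negligible while N=2 is markedly worse, mirroring the turnpike-based convergence results and the empirically observed diminishing returns cited above.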
7. Impact, Generality, and Future Directions
The receding horizon approach is foundational in systems and control theory, robotics, multi-agent systems, and real-time decision-making. Its modularity—enabling incorporation of physics, resource constraints, uncertainty, and hybrid logic—has facilitated widespread adoption for safety-critical and mission-critical applications. Current research continues to advance theoretical performance bounds (e.g., in regret minimization, stochastic optimization) (Martin et al., 2023, Sun et al., 2023), computational efficacy for high-dimensional and hybrid systems (Wang et al., 2023, Tavassoli, 2018), and integration with learning-based components to handle model misspecification and adaptivity (Zheng et al., 7 Oct 2025, Bejjani et al., 2018).
As receding horizon methods permeate fields beyond engineering (e.g., economics (Chitraganti et al., 2014), operations research, distributed monitoring (Welikala et al., 2020)), principled design of horizon logic, constraint handling, and feedback integration remains an active area of research, with ongoing work bridging optimal control, artificial intelligence, and online learning.