Receding-Horizon Nonlinear MPC
- Receding-horizon nonlinear MPC is an advanced control strategy that solves a finite-horizon nonlinear optimal control problem at each step to flexibly manage constraints and feedback.
- Recent developments such as relaxed Lyapunov inequalities and multi-step performance indices provide explicit stability and suboptimality guarantees without relying on traditional terminal constraints.
- Algorithmic innovations including dynamic update schedules and slack mechanisms reduce prediction horizon requirements and computational load while maintaining robust closed-loop performance.
Receding-horizon nonlinear model predictive control (MPC) is an advanced control strategy in which at each time step a nonlinear finite-horizon optimal control problem is solved, and the solution is applied in a rolling (receding) fashion. Compared to linear or open-loop designs, receding-horizon nonlinear MPC yields flexible feedback mechanisms, can systematically handle state and control constraints, and is compatible with general nonlinear system dynamics. The key technical challenge has historically been guaranteeing closed-loop stability and performance in the absence of terminal constraints or terminal penalties, while maintaining computational tractability with short prediction horizons. Recent theoretical and algorithmic developments—including relaxed Lyapunov inequalities, aggregated multi-step suboptimality indices, and “robustified” update schedules—now offer explicit stability and suboptimality guarantees even without terminal constraints or conservative terminal ingredients.
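The rolling structure can be made concrete with a short simulation sketch. The Python snippet below is a minimal illustration, not the implementation from the source: the scalar dynamics $x^+ = x + \tau(\sin x + u)$, the quadratic stage cost, the horizon length, the input bounds, and the use of scipy's L-BFGS-B solver are all assumptions made purely for demonstration. At every step the finite-horizon problem is re-solved from the current state and only the first control is applied.

```python
# Minimal receding-horizon NMPC loop (illustrative sketch, not the source's code):
# the scalar dynamics, stage cost, horizon, input bounds, and solver choice below
# are assumptions made purely for demonstration.
import numpy as np
from scipy.optimize import minimize

def f(x, u, tau=0.1):
    """Assumed example dynamics x+ = x + tau*(sin(x) + u)."""
    return x + tau * (np.sin(x) + u)

def stage_cost(x, u, lam=0.1):
    """Quadratic stage cost l(x, u) = x^2 + lam*u^2."""
    return x**2 + lam * u**2

def finite_horizon_cost(u_seq, x0):
    """Cost of an open-loop control sequence over the prediction horizon."""
    x, J = x0, 0.0
    for u in u_seq:
        J += stage_cost(x, u)
        x = f(x, u)
    return J

def solve_ocp(x0, N, u_init=None):
    """Solve the finite-horizon problem (no terminal cost or constraint)."""
    u0 = np.zeros(N) if u_init is None else u_init
    res = minimize(finite_horizon_cost, u0, args=(x0,),
                   method="L-BFGS-B", bounds=[(-2.0, 2.0)] * N)
    return res.x, res.fun            # optimized sequence u*(.; x0) and V_N(x0)

# Receding-horizon loop: re-optimize at every step and apply only the first control.
N, x = 5, 1.0
u_warm = np.zeros(N)
for n in range(30):
    u_star, V_N = solve_ocp(x, N, u_warm)
    x = f(x, u_star[0])                     # implement the first element only
    u_warm = np.append(u_star[1:], 0.0)     # shifted warm start for the next solve
print("final state:", x)
```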
1. Relaxed Lyapunov Inequality and Stability Guarantees
A central mechanism for stability analysis is the relaxed Lyapunov inequality based on the finite-horizon value function $V_N$. The classical Lyapunov decrease condition is replaced with the requirement that, for some $\alpha \in (0, 1]$ and receding-horizon feedback law $\mu_N$,

$$V_N(x(n+1)) \le V_N(x(n)) - \alpha\, \ell(x(n), \mu_N(x(n)))$$

for all time indices $n$. Here, $\ell$ denotes the stage cost, and $\alpha$ is a suboptimality index guaranteeing strict monotonic decrease of $V_N$ proportional to the instantaneous cost. If upper and lower $\mathcal{K}_\infty$-type bounds on $V_N$ as well as similar stage cost bounds are satisfied, this relaxed inequality ensures global asymptotic stability of the closed-loop system. The approach extends to multi-step relaxations: if $\mu_N$ applies $m$ elements of the optimized sequence $u^\star(\cdot; x(n))$ before updating, the aggregated condition

$$V_N(x(n+m)) \le V_N(x(n)) - \alpha \sum_{k=0}^{m-1} \ell(x(n+k), u^\star(k))$$

is enforced. This flexible structure provides a practical tool for the case where the standard one-step condition is too conservative or fails at some points. In aggregated multi-step conditions, a positive suboptimality index $\alpha$ can be retained by increasing $m$ (i.e., updating less frequently), thus maintaining stability.
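To make the difference between the one-step and the aggregated condition concrete, the small helpers below compute the largest admissible $\alpha$ from measured values of $V_N$ and the stage cost. The function names and the numbers are illustrative assumptions, not data from the source; the example shows a case where the one-step check fails while the aggregated three-step check still certifies a positive $\alpha$.

```python
# A-posteriori check of the relaxed Lyapunov inequality on measured data
# (illustrative numbers, not results from the source).

def one_step_alpha(VN_now, VN_next, ell_now):
    """Largest alpha with V_N(x(n+1)) <= V_N(x(n)) - alpha * l(x(n), mu_N(x(n)))."""
    return (VN_now - VN_next) / ell_now

def multi_step_alpha(VN_now, VN_after_m, ells):
    """Largest alpha for the aggregated condition
    V_N(x(n+m)) <= V_N(x(n)) - alpha * sum_{k=0}^{m-1} l(x(n+k), u(k))."""
    return (VN_now - VN_after_m) / sum(ells)

# The one-step condition fails locally (alpha < 0) ...
print(one_step_alpha(VN_now=4.0, VN_next=4.1, ell_now=1.0))                # -0.1
# ... while aggregating over m = 3 implemented steps still yields alpha > 0.
print(multi_step_alpha(VN_now=4.0, VN_after_m=2.5, ells=[1.0, 0.8, 0.6]))  # 0.625
```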
2. Suboptimality and Performance Estimates
The receding-horizon nonlinear MPC framework yields precise performance guarantees by comparing the finite-horizon closed-loop cost to the infinite-horizon optimum. Whenever the relaxed Lyapunov inequality holds, the following bounds are available at each time step $n$:

$$\alpha\, V_\infty^{\mu_N}(x(n)) \le V_N(x(n)) \le V_\infty(x(n)),$$

where $V_\infty$ is the true infinite-horizon value function, and $V_\infty^{\mu_N}$ is the infinite-horizon cost of the receding-horizon closed loop. The sandwiching of the actual closed-loop cost between the infinite-horizon optimum $V_\infty$ and the scaled optimum $V_\infty/\alpha$ quantifies explicit suboptimality, with $\alpha$ determined by the Lyapunov decrease condition. A similar aggregated performance index applies in the multi-step updating case, with

$$\bar{\alpha} = \min_{n} \frac{V_N(x(n)) - V_N(x(n+m))}{\sum_{k=0}^{m-1} \ell(x(n+k), u^\star(k))},$$

taken over the update instants $n$, governing the performance lower bound. Performance and suboptimality are thus explicitly parameterized by the decrease in $V_N$ over the implemented open-loop segment, rather than by the total prediction horizon length.
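A hedged sketch of how such an estimate might be evaluated along a recorded closed-loop trajectory is shown below. The arrays `VN` and `ell` are placeholder data and the helper `aposteriori_alpha` is introduced only for illustration; with the measured $\alpha$, the guaranteed bound on the infinite-horizon closed-loop cost is simply $V_N(x(0))/\alpha$.

```python
# A-posteriori suboptimality estimate along a recorded closed loop
# (placeholder trajectory data, not results from the source).
import numpy as np

def aposteriori_alpha(VN, ell):
    """Smallest observed one-step decrease ratio along the trajectory:
    alpha = min_n (V_N(x(n)) - V_N(x(n+1))) / l(x(n), mu_N(x(n)))."""
    VN, ell = np.asarray(VN), np.asarray(ell)
    return float(np.min((VN[:-1] - VN[1:]) / ell))

# V_N recorded at each visited state (one more entry than the applied stage costs).
VN  = [4.00, 3.10, 2.35, 1.75, 1.30]
ell = [1.20, 1.00, 0.80, 0.60]

alpha = aposteriori_alpha(VN, ell)
print(f"alpha = {alpha:.3f}")
# alpha * V_inf^{mu_N}(x(0)) <= V_N(x(0)) <= V_inf(x(0)), so the closed-loop
# cost from x(0) is guaranteed to be at most V_N(x(0)) / alpha.
print(f"closed-loop cost bound: {VN[0] / alpha:.3f}, cost incurred so far: {sum(ell):.2f}")
```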
3. Algorithm Design and Computational Aspects
A major obstacle in classical nonlinear MPC is the exponential growth of the computational burden with the prediction horizon $N$. The relaxation techniques above enable the use of a shorter $N$ without losing stability guarantees. Algorithmically, at each receding-horizon iteration, the algorithm may:
- Search for the minimal number of implemented steps $m$ such that the aggregated relaxed Lyapunov condition holds for the desired $\alpha$;
- If no suitable $m$ can be found, engage an exit strategy that tolerates temporary violations and accumulates a slack variable $s(n)$:

$$s(n+1) = s(n) + \alpha\, \ell(x(n), \mu_N(x(n))) - \bigl[V_N(x(n)) - V_N(x(n+1))\bigr].$$

The slack $s(n)$ tracks cumulative deviations from the desired decrease and can be used to monitor, correct, or terminate “recovery” episodes in subsequent iterations.
Additionally, if aggregated conditions are not met, the feedback law can “robustly” update at intermediate steps (after only $j < m$ elements of the current sequence have been applied), resetting the candidate control sequence to the updated open-loop plan; a sketch of this scheduling logic is given below. This hybridization further reduces conservatism and permits adaptive, event-driven re-optimization schedules.
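The scheduling and exit-strategy logic of this section can be sketched on recorded trajectory data as follows. The function `schedule_updates`, its parameters `desired_alpha` and `m_max`, and the numbers are illustrative assumptions rather than the source's Algorithms 1 and 2: at each update instant it searches for the minimal admissible $m$ and, if none exists, applies a single step and accumulates the violation in the slack.

```python
# Sketch of the update-scheduling logic with slack-based exit strategy
# (illustrative stand-in for the source's algorithms, on recorded data).
import numpy as np

def schedule_updates(VN, ell, desired_alpha=0.3, m_max=3):
    """Choose the minimal admissible m at each update instant; on failure,
    implement one step anyway and accumulate the violation in the slack."""
    VN, ell = np.asarray(VN), np.asarray(ell)
    n, slack, schedule = 0, 0.0, []
    while n < len(ell):
        chosen = None
        for m in range(1, min(m_max, len(ell) - n) + 1):
            if VN[n] - VN[n + m] >= desired_alpha * ell[n:n + m].sum():
                chosen = m                   # minimal m with aggregated decrease
                break
        if chosen is None:                   # exit strategy: tolerate the violation
            chosen = 1
            slack += desired_alpha * ell[n] - (VN[n] - VN[n + 1])
        schedule.append((n, chosen))
        n += chosen
    return schedule, slack

# Illustrative data with a local increase of V_N around n = 2, rescued by m = 2.
VN  = [5.0, 4.2, 3.9, 4.0, 3.0, 2.4]
ell = [1.0, 0.9, 0.8, 1.1, 0.7]
print(schedule_updates(VN, ell))   # -> ([(0, 1), (1, 1), (2, 2), (4, 1)], 0.0)
```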
4. Receding-Horizon Feedback Law Construction
The receding-horizon feedback law is generalized beyond the classical “apply only the first optimized control” paradigm. At each update instant $n_k$, the controller computes the optimal open-loop sequence $u^\star(\cdot; x(n_k))$ and implements its first $m_k$ elements. The update list $\{n_k\}$, with gaps $m_k = n_{k+1} - n_k$, is determined dynamically by the relaxed Lyapunov decrease check.
Intermediate “robustification” further allows, at any intermediate time $n$ with $n_k < n < n_{k+1}$, a switch to a refreshed control sequence computed from the current state:

$$u(n + j) = u^\star(j;\, x(n)), \qquad j = 0, 1, \dots,$$

discarding the remainder of the previously stored open-loop plan.
This schedule facilitates both dwell-time–like and event-triggered update mechanisms, tightly integrating controller update frequency with observed closed-loop performance.
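A sketch of such an event-triggered, multi-step implementation is given below, reusing the same illustrative toy system as in the first sketch. The trigger rule, namely to refresh the plan after $m$ implemented steps or as soon as the aggregated decrease since the last update falls short, is a simplified stand-in for the robustified update; re-solving the optimal control problem at every step just to evaluate $V_N$ is done only for clarity.

```python
# Multi-step receding-horizon implementation with event-triggered intermediate
# updates (illustrative sketch; dynamics, cost, solver, and trigger are assumptions).
import numpy as np
from scipy.optimize import minimize

def f(x, u, tau=0.1):
    return x + tau * (np.sin(x) + u)

def stage_cost(x, u, lam=0.1):
    return x**2 + lam * u**2

def solve_ocp(x0, N):
    def J(u_seq):
        x, c = x0, 0.0
        for u in u_seq:
            c += stage_cost(x, u)
            x = f(x, u)
        return c
    res = minimize(J, np.zeros(N), method="L-BFGS-B", bounds=[(-2.0, 2.0)] * N)
    return res.x, res.fun                      # u*(.; x0) and V_N(x0)

N, m, alpha, x = 6, 3, 0.3, 1.0
u_star, VN_ref = solve_ocp(x, N)               # plan computed at the update instant n_k
j, ell_sum = 0, 0.0                            # steps implemented / cost since n_k
for n in range(40):
    ell_sum += stage_cost(x, u_star[j])
    x = f(x, u_star[j])
    j += 1
    _, VN_now = solve_ocp(x, N)                # V_N at the current state (for the check;
                                               # a cheaper upper bound could be used)
    # Refresh after m steps, or earlier if the aggregated relaxed decrease
    # since the last update is violated ("robustified" intermediate update).
    if j >= m or VN_now > VN_ref - alpha * ell_sum:
        u_star, VN_ref = solve_ocp(x, N)
        j, ell_sum = 0, 0.0
print("final state:", x)
```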
5. Conservatism, Slack Mechanisms, and Reconciliation with Numerical Observations
While theoretical arguments based on pointwise Lyapunov decrease are conservative, numerical examples demonstrate that even when the standard decrease condition is violated locally (yielding a temporarily negative $\alpha$), the closed-loop trajectories often remain stable and converge to the equilibrium. The accumulated slack $s(n)$ provides an algorithmic means to “track” these violations and accept temporary relaxations as long as the long-run decrease of $V_N$ (or the aggregated multi-step condition) is maintained. Numerous simulations show that, compared to classical analysis, the new approach substantially reduces the required prediction horizon, thus substantiating the effectiveness of short-horizon, slack-augmented MPC. The exit strategy and alternative multi-step suboptimality indices close the “gap” between conservative theory and favorable empirical behavior.
6. Mathematical Summary and Key Formulas
The core mathematical tools underpinning receding-horizon nonlinear MPC without terminal constraints in this framework are:
- Relaxed Lyapunov (one-step): $V_N(x(n+1)) \le V_N(x(n)) - \alpha\, \ell(x(n), \mu_N(x(n)))$
- Relaxed Lyapunov (multi-step): $V_N(x(n+m)) \le V_N(x(n)) - \alpha \sum_{k=0}^{m-1} \ell(x(n+k), u^\star(k))$
- Performance bounds: $\alpha\, V_\infty^{\mu_N}(x) \le V_N(x) \le V_\infty(x)$, hence $V_\infty^{\mu_N}(x) \le V_\infty(x)/\alpha$
- Slack accumulation: $s(n+1) = s(n) + \alpha\, \ell(x(n), \mu_N(x(n))) - \bigl[V_N(x(n)) - V_N(x(n+1))\bigr]$
The feedback laws, update rules, and slack mechanism are integrated into algorithmic approaches (see Algorithms 1 and 2 in the source), permitting frequent or infrequent updates, event-driven robustification, and real-time correction of slack-induced transients.
7. Significance and Impact
Receding-horizon nonlinear MPC with relaxed Lyapunov–based guarantees, as posed in this research, bridges the gap between theory and practice by:
- Allowing stability and suboptimality guarantees with reduced (or no) terminal constraints, thereby improving computational tractability;
- Offering a flexible implementation that can adaptively choose update intervals, making it robust to transient violations and suboptimality gaps;
- Enabling high-performance control in genuinely nonlinear settings with complex constraints, as demonstrated by reductions in required prediction horizon and confirmed by numerical studies.
This methodology sets a foundation for further developments in nonlinear MPC—particularly in high-dimensional or real-time applications—where the elimination of terminal constraints and the efficient use of short horizons are vital for feasibility and implementability.