Multiobjective MPC
- Multiobjective Model Predictive Control (MOMPC) is a framework that solves finite-horizon optimal control problems with multiple conflicting objectives.
- It employs scalarization and Pareto analysis to balance criteria such as energy efficiency, safety, and performance in real-time applications.
- Algorithmic methods decompose computation into offline library generation and online interpolation to ensure robust, stable control under uncertainties.
Multiobjective Model Predictive Control (MOMPC) extends the classical model predictive control paradigm to simultaneously address multiple, typically conflicting, performance criteria within the receding horizon optimal control framework. Rather than reducing the multiobjective problem to a single scalarized objective, MOMPC explicitly formulates, computes, and implements feedback control actions that negotiate optimal compromises among objectives such as setpoint tracking, energy consumption, constraint violation, robustness, comfort, or economic cost. Modern approaches combine algorithmic, theoretical, and computational advances, yielding robust, real-time compatible schemes capable of handling nonlinearities, uncertainties, structural design tradeoffs, and user-adaptive preferences (Ober-Blöbaum et al., 2018, Castellanos et al., 2020, Herrmann-Wicklmayr et al., 27 Oct 2025, Niepötter et al., 15 Nov 2025).
1. Multiobjective MPC Problem Formulation
MOMPC operates by solving, at each control sampling time, a finite-horizon Optimal Control Problem (OCP) with a vector-valued cost function $J(u) = (J_1(u), \ldots, J_m(u))^\top$, subject to system dynamics, state/input constraints, and possibly parameter uncertainties:

$$\min_{u \in \mathcal{U}} \; J(u) \quad \text{s.t.} \quad x^+ = f(x, u), \quad x(0) = x_0, \quad x \in \mathcal{X}, \; u \in \mathcal{U}.$$

The multiobjective OCP seeks the set of Pareto-optimal control functions $u^*$ for which $J(u^*)$ is not dominated componentwise by any other admissible $u$. In the presence of uncertainty (e.g., in initial conditions), robust MOMPC formulations utilize a min–max set-based notion of Pareto efficiency, ranking controls by their worst-case cost vectors $\max_{\xi \in \Xi} J(u, \xi)$, where $\Xi$ is a compact uncertainty set (Castellanos et al., 2020). Control synthesis can target the efficient frontier or specific compromise solutions selected by the user/decision-maker at runtime.
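The componentwise dominance relation underlying Pareto optimality is easy to make concrete. A minimal sketch (plain Python, with purely illustrative cost vectors, not any cited implementation) that filters a finite candidate set down to its non-dominated subset:

```python
def dominates(a, b):
    """True if cost vector a dominates b: a <= b componentwise, strictly better in at least one entry."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(costs):
    """Return the non-dominated subset of a list of cost vectors (order preserved)."""
    return [c for c in costs if not any(dominates(o, c) for o in costs if o is not c)]

# Toy candidates: (tracking error, energy use) per admissible control
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5)]
front = pareto_filter(candidates)  # (2.5, 3.5) is dominated by (2.0, 3.0) and drops out
```

In a full MOMPC scheme this filter would be applied to the cost vectors of scalarized OCP solutions rather than fixed tuples; the dominance test itself is unchanged.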
2. Scalarization and Decision-Making Mechanisms
Since the Pareto front is generally uncountable, practical MOMPC implementations employ scalarization methods that translate high-level preferences into single-objective OCPs whose optimizers are Pareto-efficient points:
- Weighted sum: minimize $\sum_{i=1}^{m} w_i J_i(u)$ with weights $w_i \ge 0$, $\sum_i w_i = 1$
- $\varepsilon$-constraint: Minimize $J_j(u)$, subject to $J_i(u) \le \varepsilon_i$ for all $i \neq j$
- Reference point/achievement scalarization: Minimize the (Hausdorff) distance of $J(u)$ to an aspiration point $z$ (Castellanos et al., 2020, Ober-Blöbaum et al., 2018, Herrmann-Wicklmayr et al., 27 Oct 2025)
- Individual minima (IM)-informed methods: Six methods outlined in (Herrmann-Wicklmayr et al., 27 Oct 2025), including standard weighted sum, knee-point, and multiple Pascoletti–Serafini variants, leverage the individual minima points and their convex hull.
- Multi-horizon mission-based scalarization: For path planning with abort-plan safety, simplex-weighted cost vectors encode both primary and backup mission objectives, dynamically adapted online (Kim et al., 2021).
For convex frontiers, proper scalarization guarantees that each scalarized optimizer is a true Pareto point. Nonconvexity may necessitate nonlinear or reference-point methods to recover the full frontier (Ober-Blöbaum et al., 2018).
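The two simplest scalarizations above can be illustrated on a toy problem. The sketch below (pure Python, a hypothetical pair of convex scalar objectives standing in for horizon cost functionals, grid search standing in for an OCP solver) shows how a weight sweep or a constraint bound each pick out one Pareto point:

```python
# Toy conflicting objectives over a scalar decision u (stand-ins for
# e.g. tracking-error and control-effort cost functionals).
J1 = lambda u: (u - 1.0) ** 2
J2 = lambda u: (u + 1.0) ** 2
grid = [i / 1000.0 for i in range(-2000, 2001)]  # admissible u in [-2, 2]

def weighted_sum(w1, w2):
    """Weighted-sum scalarization: argmin_u  w1*J1(u) + w2*J2(u)."""
    return min(grid, key=lambda u: w1 * J1(u) + w2 * J2(u))

def eps_constraint(eps):
    """epsilon-constraint scalarization: argmin_u J1(u) subject to J2(u) <= eps."""
    feasible = [u for u in grid if J2(u) <= eps]
    return min(feasible, key=J1)
```

Sweeping `(w1, w2)` over the simplex, or `eps` over a range, traces out the Pareto front; here the weighted-sum minimizer is simply `u* = w1 - w2`, and tightening `eps` pulls the solution toward the minimizer of `J2`.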
3. Algorithmic Frameworks: Offline/Online Decomposition and Real-Time Implementation
To achieve tractable, real-time MOMPC, many methods split the computational burden:
- Offline phase: Compute grids/libraries of Pareto-optimal control actions for representative parameter/initial condition scenarios. Exploit symmetry groups whenever possible to reduce the parameter space (Ober-Blöbaum et al., 2018, Castellanos et al., 2020, Peitz et al., 2016). Scalarization over preference weights or reference-point sweeps is used to densely sample the Pareto front at each grid node.
- Online phase: At each control instant:
  - Measure the current state and context, and map them to the nearest (reduced) grid points.
  - Retrieve the corresponding Pareto libraries; for a given preference weight or reference point, select or interpolate a candidate control.
  - Optionally perform a lightweight online scalarized OCP refinement for improved optimality or to handle off-grid conditions (Castellanos et al., 2020).
  - Apply the first control segment, shift the horizon, and repeat.
This paradigm enables real-time feasibility with lookup and interpolation (typical online costs on the order of 50 ms) while maintaining full multiobjective flexibility (Peitz et al., 2016, Ober-Blöbaum et al., 2018).
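The offline/online split can be sketched end to end on a deliberately tiny example. Everything below is a placeholder: a hypothetical one-state linear system, a one-step cost pair instead of a horizon OCP, and nearest-neighbor lookup instead of interpolation; it is meant only to show the shape of the decomposition, not any cited library:

```python
# --- Offline phase: build a Pareto library over a coarse state grid. ---
def stage_costs(x, u):
    x_next = 0.9 * x + u             # toy one-state linear dynamics (assumption)
    return (x_next ** 2, u ** 2)     # (tracking cost, control-effort cost)

U_CANDIDATES = [i / 10.0 for i in range(-10, 11)]           # admissible inputs in [-1, 1]
STATE_GRID = [-2.0, -1.0, 0.0, 1.0, 2.0]                    # representative states
WEIGHTS = [(w / 10.0, 1.0 - w / 10.0) for w in range(11)]   # weight sweep over the simplex

library = {}
for x in STATE_GRID:
    # For each preference weight, store the weighted-sum optimizer at this grid node.
    library[x] = {w: min(U_CANDIDATES,
                         key=lambda u: w[0] * stage_costs(x, u)[0]
                                     + w[1] * stage_costs(x, u)[1])
                  for w in WEIGHTS}

# --- Online phase: nearest grid node + library lookup at each instant. ---
def mpc_step(x_measured, preference):
    """Cheap online step: snap state and preference to the grid, look up the control."""
    x_node = min(STATE_GRID, key=lambda g: abs(g - x_measured))
    w_node = min(WEIGHTS, key=lambda w: abs(w[0] - preference))
    return library[x_node][w_node]   # first control segment to apply
```

A real scheme would replace the lookup with interpolation between neighboring nodes and optionally warm-start a scalarized OCP from the retrieved control, as described above.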
4. Theoretical Guarantees: Feasibility, Stability, and Dissipativity
Rigorous analysis of MOMPC considers feasibility (recursive constraint satisfaction), stability (closed-loop convergence), and cost improvement/performance for all objectives:
- Strict dissipativity: If each stage cost is strictly dissipative, then convex combinations are strictly dissipative under mild conditions, leading to closed-loop stability for any trade-off (Grüne et al., 2022).
- Relaxed requirements: Asymptotic stability can be ensured under strict dissipativity for only one of the objectives, provided compatible terminal costs and proper selection of efficient controls at each iteration (Eichfelder et al., 2022).
- Constrained descent: Enforcing a per-step decrease of each objective along the closed-loop trajectory, or a subset thereof, ensures monotonic improvement and stability (Herrmann-Wicklmayr et al., 27 Oct 2025, Nair et al., 2024).
- Pareto convergence in learning-based MOMPC: Iterative data-driven MPC schemes with convex cost objectives and monotonicity constraints guarantee convergence to the Pareto front of the infinite-horizon multiobjective problem (Nair et al., 2024).
- For switched or hybrid MOMPC, multiple Lyapunov functions (one per objective/mode) can be employed to ensure global asymptotic or input-to-state stability by enforcing margin or decrease constraints at each switching event (Niepötter et al., 15 Nov 2025).
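The per-step descent condition is the easiest of these mechanisms to show in code. The following schematic Python filter (not any cited algorithm; candidate controls and cost vectors are hypothetical) keeps only candidates whose cost vector decreases every objective relative to the previous step, then breaks ties by cost sum:

```python
def select_descending(candidates, prev_costs, margin=0.0):
    """Among candidate (control, cost_vector) pairs, keep those whose cost vector
    decreases every objective by at least `margin` relative to the previous step,
    then pick the one with the smallest cost sum.
    Returns None if no candidate satisfies the descent condition."""
    feasible = [(u, c) for (u, c) in candidates
                if all(ci <= pi - margin for ci, pi in zip(c, prev_costs))]
    if not feasible:
        return None
    return min(feasible, key=lambda uc: sum(uc[1]))

prev = (4.0, 3.0)  # cost vector achieved at the previous step
cands = [("u_a", (3.5, 2.5)), ("u_b", (3.0, 3.5)), ("u_c", (3.8, 2.9))]
choice = select_descending(cands, prev)  # "u_b" fails the descent test on the second objective
```

Enforcing such a filter at every step is what yields the monotone improvement arguments cited above; a strictly positive `margin` plays the role of the decrease constraint used in the Lyapunov-based analyses.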
5. Example Applications and Computational Profiles
MOMPC methodologies have been applied across a range of domains:
- Autonomous driving: Conflicting objectives such as speed-vs-safety, with Pareto set computation over a grid of car states and track/geometric symmetries. Full nonlinear EMOMPC, offline libraries with parallel computation (Ober-Blöbaum et al., 2018, Castellanos et al., 2020).
- Electric vehicle energy management: Energy consumption vs. travel time objectives, with scenario-based state grids, explicit Pareto cataloguing, and real-time preference adaptation (Peitz et al., 2016).
- Residential demand response: Minimization of energy costs and user dissatisfaction under uncertainty, using Laguerre parameterization and a constrained evolutionary algorithm for feasible search (Lin et al., 13 Jan 2026).
- Vehicle guidance and MPC parameter tuning: Multiobjective Bayesian optimization to tune cost function weights for path-following controllers subject to comfort, accuracy, and speed metrics (Gharib et al., 2021, Zarrouki et al., 2024).
- Safety-critical planning: Multi-mission (primary/backup) MOMPC with mission weights and multi-horizon trajectory simulation for contingency guarantees (Kim et al., 2021).
Parallel computing resources (multi-core/GPUs) are frequently leveraged for the offline Pareto set computation. Online costs are generally dominated by interpolation and scalarized OCP solves, and remain compatible with hard real-time constraints (Ober-Blöbaum et al., 2018, Castellanos et al., 2020).
6. Trade-Offs, Performance Evaluation, and Design Optimization
MOMPC exposes inherent trade-offs between conflicting objectives, often visualized as Pareto fronts:
- Data-driven or Bayesian optimization frameworks can efficiently explore the space of MPC weights/design parameters to yield a catalog of non-dominated closed-loop performances (Gharib et al., 2021, Zarrouki et al., 2024, Bachtiar et al., 2016).
- Specialized search/optimizer algorithms (e.g., DITRI, expected-hypervolume-improvement BO) facilitate global coverage of the Pareto frontier, addressing both continuous and discrete parameter variables (Bachtiar et al., 2016, Gharib et al., 2021).
- Explicit attention is given to runtime adaptivity: RL-based agents can select among pre-optimized Pareto-optimal weights, yielding adaptive “weight-varying” MPCs with provable closed-loop safety and the potential for super-Pareto performance via online learning (Zarrouki et al., 2024).
- Key metrics include constraint satisfaction, closed-loop cost for each objective, real-time feasibility, and robustness to disturbances or uncertainties.
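Catalogs of non-dominated closed-loop performances are typically compared via Pareto-front quality indicators; the hypervolume (area dominated up to a user-chosen reference point) is the most common. A minimal two-objective version for minimization problems, written as a sketch rather than a reimplementation of any cited toolchain:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front, given as (f1, f2) pairs:
    the area dominated by the front and bounded by the reference point `ref`."""
    # Keep points within the reference box, sweep in increasing f1.
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                       # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For example, the front `[(1, 3), (2, 2), (3, 1)]` with reference point `(4, 4)` dominates three stacked rectangles of areas 3, 2, and 1, giving a hypervolume of 6. Larger hypervolume under a fixed reference point indicates a better (wider and deeper) trade-off catalog.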
7. Limitations, Open Problems, and Future Directions
Outstanding challenges include:
- Scalability: Offline grid complexity is exponential in the effective parameter dimension; symmetry reduction is essential but not always exploitable (Castellanos et al., 2020, Ober-Blöbaum et al., 2018).
- Uncertainty handling: Most robust MOMPC schemes focus on deterministic uncertainty (e.g., initial state), with extensions to general stochastic or time-varying uncertainties still an active area (Castellanos et al., 2020, Lin et al., 13 Jan 2026).
- Nonconvexity of Pareto fronts: Nonconvex Pareto sets require nonlinear scalarization or reference-point methodologies for full coverage (Ober-Blöbaum et al., 2018, Herrmann-Wicklmayr et al., 27 Oct 2025).
- Stability theory for IM-informed and data-driven schemes: While guarantees have been established under certain monotonicity and descent conditions (Herrmann-Wicklmayr et al., 27 Oct 2025, Nair et al., 2024), generalization to fully nonlinear or strongly coupled problems remains open.
- Online computational complexity: The balance between Pareto flexibility, real-time feasibility, and robustness is an ongoing design consideration; adaptive and learning-based trade-off management remains an area of methodological innovation (Zarrouki et al., 2024).
MOMPC continues to advance as a central paradigm for cyberphysical systems where complexity, safety, and competing objectives must be negotiated in a transparent, high-assurance, and real-time compatible manner.