
Dynamic State Predictive Control (DSPC)

Updated 2 December 2025
  • DSPC is a framework that generalizes MPC to manage dynamic, time-varying, and nonstationary objectives in classical and learning-based control systems.
  • It embeds finite-horizon OCPs into dynamic feedback laws using primal-dual dynamics to ensure constraint satisfaction, stability, and recursive feasibility.
  • DSPC leverages data-driven state-space models and deep learning techniques to enable applications such as image super-resolution and multimodal sensor fusion.

Dynamic State Predictive Control (DSPC) is a framework that generalizes Model Predictive Control (MPC) to address dynamic, time-varying, and nonstationary objectives in both classical control and modern learning-based domains. In its canonical form, DSPC embeds finite-horizon optimal control problems into dynamic feedback laws, leveraging both real-time optimization and dynamic system theory to ensure constraint satisfaction, stability, and robust performance under time-varying or uncertain environments. Recently, DSPC has been extended to data-driven state-space models in deep learning, supporting applications such as image super-resolution and multimodal sensor fusion.

1. Mathematical Foundations and Formulation

DSPC is defined by a receding-horizon optimal control problem (OCP) solved in closed-loop with the plant. For continuous-time, linear time-invariant (LTI) systems, the plant dynamics and output are given by

$$\dot \xi = A_c\,\xi + B_c\,\nu, \qquad \psi = C_c\,\xi + D_c\,\nu,$$

with convex constraints on the state $\xi$ and input $\nu$ defined by sets $\mathcal{X}, \mathcal{U}$ (Nicotra et al., 2017). The standard OCP seeks a sequence $(x_0,\ldots,x_N,\,u_0,\ldots,u_{N-1})$ minimizing a cumulative cost,

$$\min \sum_{k=0}^{N-1} \tau\,\ell(x_k-\bar{\xi}_r,\, u_k-\bar{\nu}_r) + \phi(x_N-\bar{\xi}_r),$$

subject to the discretized system dynamics, constraints, and a terminal set. Here, $\bar{\xi}_r, \bar{\nu}_r$ are admissible equilibria for an auxiliary reference $r$.
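
To make the structure of this finite-horizon OCP concrete, here is a minimal sketch (not the papers' implementation): the discretized problem for a double integrator, with only the dynamics as equality constraints, condensed into a QP and solved via its KKT system. Horizon, step size, and weights are illustrative choices.

```python
import numpy as np

tau, N = 0.1, 5
A = np.array([[1.0, tau], [0.0, 1.0]])       # discretized double integrator
B = np.array([[0.5 * tau**2], [tau]])
nx, nu = 2, 1
Q, R, P = np.eye(nx), np.eye(nu), 10 * np.eye(nx)

x0 = np.array([1.0, 0.0])                    # current plant state
nz = N * nx + N * nu                         # decision vars: x_1..x_N, u_0..u_{N-1}

# Quadratic cost: sum tau*(x'Qx + u'Ru) + x_N' P x_N  (regulation to the origin)
H = np.zeros((nz, nz))
for k in range(N - 1):
    H[k*nx:(k+1)*nx, k*nx:(k+1)*nx] = tau * Q
H[(N-1)*nx:N*nx, (N-1)*nx:N*nx] = P          # terminal weight
for k in range(N):
    i = N*nx + k*nu
    H[i:i+nu, i:i+nu] = tau * R

# Dynamics as equality constraints G z = g(x0)
G = np.zeros((N * nx, nz))
g = np.zeros(N * nx)
for k in range(N):
    G[k*nx:(k+1)*nx, k*nx:(k+1)*nx] = np.eye(nx)   # x_{k+1}
    if k > 0:
        G[k*nx:(k+1)*nx, (k-1)*nx:k*nx] = -A       # -A x_k
    G[k*nx:(k+1)*nx, N*nx + k*nu : N*nx + (k+1)*nu] = -B
g[:nx] = A @ x0                                    # x_1 = A x_0 + B u_0

# KKT system for min 1/2 z'Hz s.t. Gz = g:  [[H, G'], [G, 0]] [z; lam] = [0; g]
KKT = np.block([[H, G.T], [G, np.zeros((N*nx, N*nx))]])
sol = np.linalg.solve(KKT, np.concatenate([np.zeros(nz), g]))
z, lam = sol[:nz], sol[nz:]
u0 = z[N*nx : N*nx + nu]                           # first input, applied to the plant
```

Only this first input is applied; at the next sampling instant the OCP is rebuilt from the new measured state, which is the receding-horizon principle.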

DSPC discretizes the model evolution with a finite step $\tau$, constructs a primal-dual formulation incorporating Lagrange multipliers for the equality and inequality constraints, and then "embeds" the OCP solution as the equilibrium of a continuous-time primal-dual dynamic system. The controller states $(z,\lambda,\mu)$ thus evolve as

$$\begin{cases} \dot z = -\alpha\,\nabla_z L, \\ \dot\lambda = +\alpha\,(Gz - g(\xi)), \\ \dot\mu = +\alpha\,[h(z) - P_N(h(z), \mu)], \end{cases}$$

where $L$ is the OCP Lagrangian, $\alpha$ a dynamic rate, and $P_N$ a projection onto the normal cone (Nicotra et al., 2017). The closed-loop system couples the plant and controller, and stability is proven using input-to-state stability (ISS) and small-gain analysis.
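
The dynamic-embedding idea can be sketched on a toy problem: instead of solving the OCP at each step, the controller state flows along the primal-dual gradient dynamics until it reaches the KKT point. Here a small equality-constrained QP stands in for the OCP; the data $H, q, G, g$ are illustrative, not from the papers.

```python
import numpy as np

# Toy QP: min z1^2 + z2^2 - 2 z1  s.t.  z1 + z2 = 1  (KKT point: z* = (1, 0), lam* = 0)
H = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([-2.0, 0.0])
G = np.array([[1.0, 1.0]])
g = np.array([1.0])

z = np.zeros(2)          # primal controller state
lam = np.zeros(1)        # dual controller state
alpha, dt = 1.0, 0.01    # flow rate and Euler step (alpha must outpace the plant)

for _ in range(5000):
    grad_L = H @ z + q + G.T @ lam               # nabla_z L
    z = z + dt * (-alpha * grad_L)               # z-dot   = -alpha * nabla_z L
    lam = lam + dt * (alpha * (G @ z - g))       # lam-dot = +alpha (Gz - g)
```

The flow settles at the KKT point of the QP, which is exactly how the embedded controller tracks the MPC law without an inner optimizer.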

In learning-based DSPC, the framework is generalized to allow nonlinear, data-driven models of the form

$$\dot{x}(t) = f(x(t), u(t); \theta), \qquad y(t) = g(x(t); \theta),$$

where all differential coefficients and mappings are parameterized and learned from data (Li et al., 22 Nov 2025). Discretization (e.g., zero-order hold) yields time-varying linear systems with matrices $A_i, B_i, C_i, D_i$ dynamically generated as functions of the input features at each stage.
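
A minimal sketch of this discretization step, under stated assumptions: the feature-to-coefficients map `coeffs` is a hypothetical stand-in for a learned neural module, and the matrix exponential is a truncated Taylor series (adequate for small $\tau$).

```python
import numpy as np

def expm_taylor(M, terms=20):
    # truncated Taylor series for the matrix exponential (fine for small ||M||)
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms + 1):
        T = T @ M / k
        E = E + T
    return E

def coeffs(feature):
    # hypothetical learned map: feature -> (A_c, B_c); tanh keeps the decay bounded
    a = -1.0 - 0.5 * np.tanh(feature)
    A_c = np.array([[a, 1.0], [0.0, a]])
    B_c = np.array([[0.0], [1.0]])
    return A_c, B_c

def zoh(A_c, B_c, tau):
    # exact zero-order hold via the exponential of the augmented matrix [[A, B], [0, 0]]
    n, m = A_c.shape[0], B_c.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A_c, B_c
    E = expm_taylor(M * tau)
    return E[:n, :n], E[:n, n:]               # A_i, B_i

tau = 0.1
features = [0.3, -1.2, 0.7]                   # one feature per stage
stage_matrices = [zoh(*coeffs(f), tau) for f in features]
```

Each stage thus gets its own $(A_i, B_i)$ pair, which is what makes the resulting linear system time-varying and data-adaptive.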

2. Handling Dynamic and Economic Objectives

Classical DSPC is designed to cope with dynamically changing or infeasible setpoints by (a) allowing references to become optimization variables ("artificial references") and (b) directly encoding time-varying or economic objectives in the stage cost (Köhler et al., 2023). The OCP may thus jointly optimize the control inputs $U = (u_{0|k},\ldots,u_{N-1|k})$ and auxiliary references $R = (r_{0|k},\ldots,r_{N|k})$, minimizing a combined objective that penalizes both control performance and deviation from the desired targets:

$$\min_{U,R}\, \sum \ell(x,u;\cdot) + V_f(x_N, R_N) + \sum V_o(R_i, \mathrm{target}),$$

where $V_o$ is a penalty function. This ensures the OCP remains feasible even when targets are unreachable or continuously shifting, a necessity in economic MPC and process control.

DSPC can also embed economic optimization objectives directly as indefinite stage costs, providing performance bounds relative to the best attainable steady-state or periodic policy.
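
The artificial-reference mechanism can be illustrated on a scalar plant: when the target is unreachable, the optimizer selects the closest admissible equilibrium instead of becoming infeasible. The plant, input bound, and target below are illustrative assumptions; at an equilibrium the tracking cost vanishes, so the problem reduces to minimizing the offset cost over the admissible set.

```python
import numpy as np

a, u_max = 0.5, 1.0                 # scalar plant x+ = a x + u, |u| <= u_max
target = 5.0                        # desired setpoint (unreachable)

# Admissible equilibria: x = a x + u  =>  x = u / (1 - a), u in [-u_max, u_max]
r_grid = np.linspace(-u_max, u_max, 2001) / (1 - a)

# Offset cost V_o(r, target); the tracking terms are zero at equilibrium,
# so the OCP reduces to picking the admissible r nearest the target.
V_o = (r_grid - target) ** 2
r_star = r_grid[np.argmin(V_o)]     # boundary of the admissible set
```

The optimizer lands on the boundary equilibrium $r^\star = 2$, the best the constrained plant can do, and the OCP stays feasible throughout.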

3. Algorithmic Realizations and Solution Methods

The principal algorithms for DSPC fall into two classes: online optimization–based schemes and dynamically embedded primal-dual flows.

  • Online Optimization (MPC-style): At each sampling instant, the (possibly nonlinear) OCP is solved online using methods such as real-time Sequential Quadratic Programming (RTI-SQP), interior-point NLP solvers, or explicit multi-parametric QP for low-dimension problems. Warm-starting and horizon-shifting are standard for real-time feasibility (Köhler et al., 2023).
  • Dynamic Embedding (Primal–Dual Flows): Rather than computing the optimizer at each step, DSPC realizes the MPC control law as the equilibrium of a fast, continuous-time primal–dual differential equation driven by the current state. The "controller" itself is a dynamic system whose performance depends on tuning parameters such as the integration rate $\alpha$ relative to the plant response timescale (Nicotra et al., 2017).
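
The warm-starting mentioned above is simple enough to sketch directly: the previous optimal input sequence is shifted by one stage, and the last entry is repeated (or replaced by a terminal control law) to initialize the next solve. The shapes below are illustrative.

```python
import numpy as np

def shift_warm_start(U_prev):
    # U_prev: (N, nu) optimal input sequence from the previous sampling instant.
    # Drop u_0 (already applied) and repeat u_{N-1} to fill the new horizon tail.
    return np.vstack([U_prev[1:], U_prev[-1:]])

U_prev = np.array([[0.4], [0.3], [0.2], [0.1]])
U_init = shift_warm_start(U_prev)
```

Because consecutive OCPs differ only by one measured state, this shifted guess is typically close to the new optimum, which is what makes real-time SQP iterations viable.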

In learning applications, DSPC is implemented as a differentiable, time-varying state-space model, where the parameters of each state update (e.g., $A_i, B_i, C_i$) are generated by neural modules (CNN or MLP), and the entire system is trained end-to-end by gradient-based optimization (Li et al., 22 Nov 2025).
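
A minimal numpy sketch of such an input-dependent state-space scan, assuming a toy parameterized map in place of the paper's CNN/MLP modules (all shapes and the map itself are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, d_in = 4, 3
W_a = rng.normal(scale=0.1, size=(d_in,))          # stand-in "learned" parameters
W_b = rng.normal(scale=0.1, size=(d_state, d_in))
W_c = rng.normal(scale=0.1, size=(1, d_state))

def scan(U):
    # U: (T, d_in) input sequence; returns (T,) outputs.
    x = np.zeros(d_state)
    ys = []
    for u in U:
        A_i = np.eye(d_state) * np.tanh(W_a @ u)   # input-dependent transition
        x = A_i @ x + W_b @ u                      # x_{i+1} = A_i x_i + B_i u_i
        ys.append((W_c @ x)[0])                    # y_i = C_i x_i
    return np.array(ys)

U = rng.normal(size=(6, d_in))
y = scan(U)
```

In the actual model this loop is written with differentiable primitives so that gradients flow through $A_i, B_i, C_i$ back into the generating networks.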

4. Dynamically Constrained Closed-Loop Stability and Feasibility

DSPC guarantees recursive feasibility and closed-loop asymptotic stability even when references move or constraints are activated. Key mechanisms include:

  • Input-to-State Stability (ISS): Stability proofs rely on small-gain theorems, showing that the loop gain $\gamma_1\gamma_2/\alpha$ (combining the plant and controller ISS gains with the controller rate) can be tuned to ensure asymptotic convergence, provided the controller's internal dynamics are fast relative to the plant (Nicotra et al., 2017).
  • Explicit Reference Governor (ERG): To handle infeasible initial conditions and enlarge the domain of attraction, DSPC can be augmented with an ERG, an auxiliary dynamic system that adapts the reference $r(t)$ to guarantee recursive feasibility. The ERG law ensures that the predicted terminal state always remains in the admissible set, driving $r(t)$ toward the desired target at a rate controlled by safety margins and a maximum permitted speed (Nicotra et al., 2017).
  • Artificial References for Robustness: By including artificial references as decision variables and penalizing their deviation from targets, DSPC ensures that the optimization remains feasible and receding-horizon stability holds for general dynamic or unreachable goals (Köhler et al., 2023).
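
The ERG update can be sketched in its navigation-field form: the applied reference moves toward the target at a speed scaled by a safety margin $\Delta(x, r)$. The margin, speed cap, and slowdown rule below are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def erg_step(r, target, margin, dt, v_max=1.0):
    # One Euler step of r-dot = (speed) * rho, with rho the unit attraction field.
    direction = target - r
    dist = np.linalg.norm(direction)
    if dist < 1e-9:
        return r
    rho = direction / dist                          # unit field toward the target
    speed = min(v_max, margin) * min(dist, 1.0)     # slow near target / low margin
    return r + dt * speed * rho

r = np.zeros(2)
target = np.array([1.0, 0.5])
for _ in range(2000):
    margin = 0.8        # in a real ERG this is computed from the current state
    r = erg_step(r, target, margin, dt=0.01)
```

When the margin shrinks (constraints about to activate), the reference freezes; when it is large, the reference advances, which is exactly the mechanism that preserves recursive feasibility.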

5. Extensions to Learning and Multimodal Fusion

In data-driven settings, DSPC is reinterpreted as a learnable, time-varying state-space sequence model. Notable techniques include:

  • Nonlinear Parameterization: DSPC generalizes fixed linear state-space models to nonlinear, data-adaptive dynamics, where each transition matrix $A_i, B_i, C_i$ is itself a learned function of the input at that stage. This increases the model's ability to capture complex temporal or spatial evolution, critical in applications such as image super-resolution (Li et al., 22 Nov 2025).
  • State Cross-Control: For multimodal fusion, e.g., combining multispectral and panchromatic inputs, DSPC allows one modality to generate the control parameters governing the state evolution of another, achieving richer and more flexible data fusion architectures (Li et al., 22 Nov 2025).
  • Progressive Transitional Learning: The model does not abruptly map input images to latent dynamics, but rather mixes outputs from several upsampling bases (bicubic, convolutional, learned degradation) at each layer, with learnable weights. This curriculum approach improves stability and mitigates error accumulation at high degradation factors (Li et al., 22 Nov 2025).
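
The basis-mixing step of progressive transitional learning can be sketched in 1-D: three simple x2 upsamplers stand in for the bicubic / convolutional / learned-degradation bases of the paper, blended by a softmax over logits that would be learned.

```python
import numpy as np

def up_nearest(x):
    return np.repeat(x, 2)                      # nearest-neighbor x2 upsampling

def up_linear(x):
    xi = np.arange(len(x))
    return np.interp(np.arange(2 * len(x)) / 2.0, xi, x)   # linear-interp x2

def up_zero_stuff(x):
    y = np.zeros(2 * len(x))
    y[::2] = x                                  # crude placeholder "learned" basis
    return y

def mix(x, logits):
    w = np.exp(logits - logits.max())
    w = w / w.sum()                             # softmax over the bases
    bases = [up_nearest(x), up_linear(x), up_zero_stuff(x)]
    return sum(wi * b for wi, b in zip(w, bases))

x = np.array([1.0, 2.0, 3.0])
y = mix(x, logits=np.array([2.0, 0.0, -2.0]))   # early training can favor one basis
```

As training progresses, the learned logits shift the mixture toward the stronger bases, giving the curriculum effect described above.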

6. Practical Applications and Numerical Examples

Classical DSPC has been successfully demonstrated on constrained regulation and tracking tasks, such as:

  • Double Integrator: For state- and input-constrained motion, DSPC and DSPC+ERG both steer the system to the goal while satisfying the constraints, with the ERG guaranteeing feasibility in cases where the plain controller alone cannot decelerate in time (Nicotra et al., 2017).
  • Spacecraft Relative Motion: With tight position and thrust constraints, DSPC+ERG navigates the multi-state system to targets in high-dimensional spaces, maintaining feasibility and near-optimal control through internal dynamics and reference adaptation (Nicotra et al., 2017).
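
For context, a closed-loop sketch of the constrained double-integrator setup: a saturated state feedback stands in for the full DSPC(+ERG) controller here, so this only illustrates the regulation task and the input constraint, not the papers' scheme. Gains, step size, and the input bound are illustrative.

```python
import numpy as np

dt, u_max = 0.05, 1.0
x, v = 1.0, 0.0                      # initial position and velocity
traj_u = []
for _ in range(400):                 # simulate 20 s
    # stand-in controller: clipped PD feedback (critically damped when unsaturated)
    u = float(np.clip(-1.0 * x - 2.0 * v, -u_max, u_max))
    traj_u.append(u)
    x, v = x + dt * v, v + dt * u    # Euler step of x-ddot = u
```

In the actual DSPC scheme the clipped feedback is replaced by the embedded primal-dual controller, and the ERG additionally shapes the reference so that the velocity constraint is never violated.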

In learning, DSPC (as in MambaX) achieves state-of-the-art image super-resolution by robustly modeling intermediate state transitions and supporting advanced data fusion and progressive domain adaptation (Li et al., 22 Nov 2025).

7. Limitations, Assumptions, and Future Directions

The DSPC paradigm assumes model fidelity for the system or feature-lifted dynamics, convexity for feasibility guarantees in the optimization, and sufficient regularity for the analytic stability results. Limitations include memory scaling with the number and size of the time-varying matrices, difficulty modeling sharp, non-smooth transitions in data-driven regimes, and potential underfitting under strong nonlinearity if the model capacity is insufficient (Li et al., 22 Nov 2025).

Potential extensions comprise incorporating stochasticity in state-space evolution, extending to video and spatiotemporal control, robustifying with diffusion/flow priors, and learning state covariance for uncertainty quantification. In all cases, the flexibility of DSPC allows unification of tracking, economic, and learning-based predictive control under a dynamic, constraint-aware, and recursively feasible framework (Nicotra et al., 2017, Köhler et al., 2023, Li et al., 22 Nov 2025).
