
MPC-Flow: Unified Flow and Control Framework

Updated 3 February 2026
  • MPC-Flow is a framework that unifies flow-based generative models with model predictive control to transform complex planning and inference challenges into controllable dynamical systems.
  • It employs receding-horizon and sampling-based MPC to refine trajectory distributions in real-time, ensuring constraint-aware decision-making in high-dimensional environments.
  • The bidirectional generation–refinement loop enhances scalability and robustness across applications such as image restoration, robot navigation, and social planning.

Model Predictive Control–Flow (MPC-Flow) encompasses a family of frameworks that unify normalizing flows or flow-based generative models with model predictive control (MPC) for scalable, constraint-aware decision-making and inference-time guidance across domains such as control, planning, and inverse problems. The central premise is to pose planning or conditional generation as a stochastic or deterministic control problem over learned flow dynamics, and to solve this efficiently at inference time via receding-horizon or sampling-based MPC, often in hybrid architectures combining learning and optimization.

1. Mathematical Formulation and Core Principles

MPC-Flow frameworks convert inference or planning under complex priors into a control problem over a learned (often neural ODE or normalizing flow) dynamical system. The two dominant formalisms are:

A. Optimal Control with Flow-based Priors

Given a pre-trained continuous normalizing flow defining a trajectory x(t) via

\dot{x}(t) = v_\theta(x(t), t), \quad x(0) \sim p_0,

the conditional generation or restoration problem (e.g., given a noisy measurement y = A(x) + \eta) becomes:

\min_{u(\cdot)} J(u) := \int_0^1 \|u(t)\|^2 \, dt + \lambda \, \Phi(x(1)),

subject to \dot{x}(t) = v_\theta(x(t), t) + u(t), where \Phi(x) is a terminal cost encoding data fidelity or task reward (Webber et al., 30 Jan 2026).
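As a concrete sketch of this formulation (the 1-D velocity field, costs, and all names below are illustrative, not taken from the cited work), the controlled ODE can be discretized with Euler steps and the objective J(u) evaluated directly:

```python
import numpy as np

def rollout_cost(u, x0, v_theta, phi, lam=1.0, T=1.0):
    """Euler rollout of the controlled ODE x' = v_theta(x, t) + u(t),
    returning J(u) = sum_k ||u_k||^2 dt + lam * Phi(x(1)) and the endpoint."""
    dt = T / len(u)
    x, ctrl_cost = x0, 0.0
    for k, uk in enumerate(u):
        x = x + dt * (v_theta(x, k * dt) + uk)
        ctrl_cost += dt * uk ** 2
    return ctrl_cost + lam * phi(x), x

# Toy 1-D instance: contracting base flow, terminal cost pulling x(1) toward 2
v_theta = lambda x, t: -x           # stand-in for the learned velocity field
phi = lambda x: (x - 2.0) ** 2      # data-fidelity / task terminal cost

J_free, _ = rollout_cost(np.zeros(50), x0=1.0, v_theta=v_theta, phi=phi)
J_ctrl, x_end = rollout_cost(np.ones(50), x0=1.0, v_theta=v_theta, phi=phi)
# u == 1 exactly cancels the drift at x = 1, trading control effort
# for a much lower terminal cost, so J_ctrl < J_free
```

The uncontrolled rollout lets x(1) decay away from the target; even a crude constant control lowers J because the terminal penalty outweighs the added control effort.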

B. Variational Inference with Conditional Normalizing Flows

For stochastic control, the distribution over optimal trajectories is characterized via a posterior p(U \mid x_0, o = 1), approximated by a conditional flow-based variational distribution q_\phi(U \mid \text{context}) that maps Gaussian noise through context-conditional invertible bijections (Power et al., 2022, Mizuta et al., 2 Aug 2025).
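A minimal stand-in for such a conditional flow, assuming a single context-conditioned affine bijection (with hypothetical linear conditioners) rather than the full coupling-layer stack used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

class ConditionalAffineFlow:
    """Minimal stand-in for q_phi(U | context): one context-conditioned
    affine bijection U = mu(c) + exp(s(c)) * Z. Real models stack many
    invertible coupling layers (e.g. RealNVP) with neural conditioners."""
    def __init__(self, dim, ctx_dim):
        # Hypothetical linear "networks" producing shift and log-scale
        self.W_mu = 0.1 * rng.normal(size=(dim, ctx_dim))
        self.W_s = 0.1 * rng.normal(size=(dim, ctx_dim))

    def forward(self, z, c):
        mu, s = self.W_mu @ c, self.W_s @ c
        return mu + np.exp(s) * z         # push Gaussian noise to a control sample

    def inverse(self, u, c):
        mu, s = self.W_mu @ c, self.W_s @ c
        return (u - mu) * np.exp(-s)      # exact inverse (invertibility)

flow = ConditionalAffineFlow(dim=4, ctx_dim=3)
c = np.array([1.0, -0.5, 2.0])            # context: encoded state/goal/environment
z = rng.normal(size=4)
u = flow.forward(z, c)
z_rec = flow.inverse(u, c)                # recovers z up to float error
```

Invertibility is what lets the same model both sample controls and back-project refined controls into latent space.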

C. Combined Learning-Optimization Architectures

Planning can be decomposed into:

  1. Learning candidate trajectory distributions via conditional flows (CFM/normalizing flow).
  2. MPC-based refinement using sampled trajectories and cost-weighted selection or perturbation.

The two stages are coupled via bidirectional information flow, e.g., using the optimized trajectory to warm-start the next generative cycle (Mizuta et al., 2 Aug 2025).

2. MPC-Flow Algorithms and Variants

A. Receding-Horizon Control in Flow-Based Inverse Problems

The global optimal control is decomposed into a sequence of finite-horizon subproblems (receding horizon control, RHC). At each iteration, one solves:

  • A K-step subproblem over horizon H is solved; the first control is applied, the state is advanced, and the plan is recomputed.
  • Special case: K = 1 recovers a memory-efficient one-step MPC variant.
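A toy sketch of this receding-horizon loop (1-D state, with grid search over constant controls as an illustrative stand-in for the subproblem solver; no names here come from the cited papers):

```python
import numpy as np

def rhc(x0, v_theta, phi, K=5, N=50, T=1.0):
    """Receding-horizon control over the flow ODE: at each step, search for
    the best constant control over a K-step horizon, apply only its first
    step, then re-plan. K=1 recovers the one-step MPC variant."""
    dt = T / N
    candidates = np.linspace(-3.0, 3.0, 61)   # toy stand-in for the solver
    x = x0
    for k in range(N):
        best_u, best_cost = 0.0, np.inf
        for u in candidates:
            xh = x
            for j in range(min(K, N - k)):    # K-step lookahead rollout
                xh = xh + dt * (v_theta(xh, (k + j) * dt) + u)
            cost = dt * u ** 2 + phi(xh)      # running + predicted terminal cost
            if cost < best_cost:
                best_u, best_cost = u, cost
        x = x + dt * (v_theta(x, k * dt) + best_u)  # apply only the first control
    return x

v_theta = lambda x, t: -x                # stand-in learned velocity field
phi = lambda x: 10.0 * (x - 2.0) ** 2    # terminal cost: steer x(1) toward 2
x_final = rhc(x0=1.0, v_theta=v_theta, phi=phi)
```

Despite only ever committing to the first control of each plan, the closed loop steers the state close to the target by the end of the horizon.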

B. FlowMPPI and Reward-Guided Conditional Flow Matching

For stochastic MPC:

  • The controller mixes trajectory samples drawn from a learned conditional normalizing flow q_\phi(U \mid \text{context}) with traditional Gaussian perturbations (MPPI).
  • Weights are computed via softmax of incurred costs plus possible latent-space penalties.
  • At each iteration, the nominal control sequence is updated as a cost-weighted mean (Power et al., 2022, Mizuta et al., 2 Aug 2025).
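One such iteration might be sketched as follows (toy quadratic cost; `flow_samples` is a synthetic stand-in for draws from the learned flow, not an actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def mppi_update(u_nom, flow_samples, cost_fn, sigma=0.5, lam=1.0):
    """One MPPI iteration: mix proposals from a (here: faked) flow model with
    Gaussian perturbations of the nominal sequence, then update the nominal
    as the softmax cost-weighted mean with temperature lam."""
    K = len(flow_samples)
    gauss = u_nom + sigma * rng.normal(size=(K,) + u_nom.shape)
    samples = np.concatenate([flow_samples, gauss])   # half flow, half Gaussian
    costs = np.array([cost_fn(u) for u in samples])
    w = np.exp(-(costs - costs.min()) / lam)          # stabilized softmax weights
    w /= w.sum()
    return (w[:, None] * samples).sum(axis=0)         # cost-weighted mean

# Toy quadratic cost whose optimum is the constant sequence u == 1
cost_fn = lambda u: float(np.sum((u - 1.0) ** 2))
u_nom = np.zeros(10)
flow_samples = rng.normal(loc=1.0, scale=0.3, size=(64, 10))  # stand-in flow draws
u_new = mppi_update(u_nom, flow_samples, cost_fn)
```

Because the flow proposals concentrate near low-cost regions, the cost-weighted mean moves the nominal sequence toward the optimum in one update.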

C. Out-of-Distribution (OOD) Environment Adaptation

When the environment context (e.g., SDF, obstacles) is OOD, a projection step (gradient-based optimization over the environment latent embedding h) is performed to find a representation \hat{h} close to the training distribution while still producing high-quality, feasible controls (Power et al., 2022).
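A minimal sketch of such a projection step, assuming a standard-normal latent prior and a hypothetical differentiable control-quality surrogate (the target and weights below are illustrative only):

```python
import numpy as np

def project_latent(h0, control_cost, steps=200, lr=0.05, eps=1e-4):
    """Project an OOD environment embedding: gradient descent on
    0.5 * ||h||^2 (negative log-likelihood under a standard-normal
    latent prior) plus a control-quality surrogate, using central
    finite differences for the surrogate term."""
    h = h0.astype(float).copy()
    for _ in range(steps):
        grad = h.copy()                        # gradient of 0.5 * ||h||^2
        for i in range(len(h)):                # finite-difference surrogate grad
            hp, hm = h.copy(), h.copy()
            hp[i] += eps
            hm[i] -= eps
            grad[i] += (control_cost(hp) - control_cost(hm)) / (2 * eps)
        h -= lr * grad
    return h

# Hypothetical surrogate: controls work well when h is near [1, 0, 0]
target = np.array([1.0, 0.0, 0.0])
control_cost = lambda h: 4.0 * float(np.sum((h - target) ** 2))
h_ood = np.array([5.0, -4.0, 3.0])             # embedding far from the prior
h_proj = project_latent(h_ood, control_cost)
```

The projected embedding balances likelihood under the prior against control quality; here it converges to the weighted compromise (8/9) * target.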

D. Bidirectional Generation–Refinement (Unified MPC-Flow)

The generation stage samples controls from the flow model, while the refinement stage (MPPI) samples around these using planning costs. Feedback occurs by mapping the refined control back to the generative model’s latent space, forming a closed loop (Mizuta et al., 2 Aug 2025).
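The closed loop can be illustrated with a deliberately trivial affine "flow" whose closed-form inverse plays the role of the back-projection step (all quantities hypothetical):

```python
import numpy as np

# Trivial affine "flow" u = mu + sigma * z; its inverse is the back-projection
mu = np.array([0.5, -0.2])
sigma = np.array([1.2, 0.8])
generate = lambda z: mu + sigma * z            # generation: latent -> control
backproject = lambda u: (u - mu) / sigma       # inverse: control -> latent

target = np.array([1.0, 1.0])
z = np.zeros(2)                                # cold start from the base point
for step in range(3):
    u = generate(z)                            # generation stage
    u_star = u + 0.1 * (target - u)            # stand-in for MPPI refinement
    z = backproject(u_star)                    # warm-start the next cycle
```

Because the warm-started latent reproduces the refined control exactly, the next generation cycle begins from the refined plan rather than from scratch.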

3. Implementation Architectures and Pseudocode

A. Flow-Based Model Structure

  • Flows: Composed of neural coupling layers (e.g., Real NVP), context vectors from encoded environment/state/goal features.
  • Environment encoding: high-dimensional environments (SDF grids/semantic maps) are compressed with a VAE; the latent vector h conditions the flow (Power et al., 2022).

B. Pseudocode Summary for FlowMPPI

Training:

For each data triple (E, x_0, x_G):

  1. Encode context C = g_\omega(x_0, x_G, h) with h \sim q_\theta(h \mid E).
  2. Generate K flow-based control samples U_k via f_\phi(Z_k; C).
  3. Compute costs J(\tau_k) and weights w_k, and update the flow parameters via the sample-based “free energy” surrogate.

Inference (in-distribution):

  1. Draw half of the K samples from MPPI Gaussian perturbations and half from the learned flow.
  2. Combine, weight, and update nominal control.

Inference (OOD environments):

Before sampling, optimize h to minimize a combination of an OOD penalty (-\log p_\psi(h)) and the flow cost; then proceed as above (Power et al., 2022).

Unified Generation-Refinement Loop:

z_warm = None  # cold start: no warm-start latent on the first cycle
for n in range(N_steps):
    c = context(x_n, x_goal, past_controls)
    # Sample K candidate control sequences from the reward-guided CFM
    u_samples = reward_guided_cfm(c, z_warm)
    u_bar = mean(u_samples)
    # MPPI refinement around the generated nominal sequence
    u_star = mppi_refine(u_bar)
    x_n = apply_to_environment(u_star[0])  # execute the first control
    # Back-project the refined sequence into the flow's latent space
    # to warm-start the next generative cycle
    z_warm = backproject(u_star)
    past_controls = shift_and_append(past_controls, u_star[0])
(Mizuta et al., 2 Aug 2025)

4. Empirical Performance and Benchmark Results

A. Inverse Problems (Image Restoration)

  • On CelebA, MPC-Flow achieves competitive PSNR/SSIM on denoising, deblurring, super-resolution, and inpainting; e.g., for denoising (σ = 0.2): MPC-Δt: 31.55 PSNR / 0.877 SSIM; PnP-Flow: 32.45 / 0.911; FlowGrad: 26.07 / 0.777; OC-Flow: 19.39 / 0.559.
  • Large scale: with a quantized FLUX.2 (32B parameters), the K = 1 RHC variant enables inference on consumer GPUs (peak VRAM < 24 GB, ≲30 s per image) (Webber et al., 30 Jan 2026).

B. Robot Navigation (FlowMPPI)

  • Double-integrator (planar): in-distribution success ~0.99; OOD success increases from 0.65 (K=256) to 0.87 (K=1024) for FlowMPPIProject, outperforming iCEM.
  • Quadrotor (12 DoF, 3D): In-distribution ~0.98; OOD 0.71→0.93 (K=256→1024) vs. 0.35→0.63 (iCEM).
  • Real-world quadrotor: 97% success in "rooms", 85% in "stairway", surpassing standard MPPI/iCEM (Power et al., 2022).

C. Social Navigation (Unified Generation–Refinement)

  • On UCY (unicycle), SDD, and simulated crowd datasets, CFM–MPPI achieves near-zero collision rates and minimum reach error, with lower smoothness costs and real-time operation (0.076 s per step), outperforming ablated baselines (Mizuta et al., 2 Aug 2025).
| Domain | Task | Key Metric | MPC-Flow Result | Baseline |
| --- | --- | --- | --- | --- |
| Image restoration | Denoising | PSNR/SSIM | 31.55/0.877 (Δt-MPC) | 26.07/0.777 (FlowGrad) |
| Navigation | Quadrotor OOD | Success rate | 0.93 (K=1024, FlowMPPIProject) | 0.63 (iCEM) |
| Social navigation | UCY | Collision % | 0.0 (CFM-MPPI) | 0.67 (MPPI) |

5. Safety, Constraint Handling, and OOD Adaptation

MPC-Flow architectures embed safety via dual strategies:

  • Soft constraint encoding: control barrier function (CBF)-style penalties or barrier terms in the MPPI cost repel trajectory samples from unsafe regions (Mizuta et al., 2 Aug 2025).
  • Projection for OOD handling: Online adaptation of environment representations in latent space is performed to improve both distributional likelihood under the VAE prior and realized control performance (Power et al., 2022).

This provides coverage of both model-based and data-driven (flow-based) notions of robustness and adaptive planning.
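A hinge-style barrier penalty of the first kind might look like the following sketch (a soft penalty added to the sampling cost, not a formal CBF constraint; the obstacle, radius, and weight are illustrative):

```python
import numpy as np

def barrier_cost(traj, obstacle, radius, weight=50.0):
    """Soft safety term for a sampling-MPC cost: a hinge penalty that grows
    quadratically as states penetrate an obstacle's radius, repelling
    trajectory samples from the unsafe region."""
    d = np.linalg.norm(traj - obstacle, axis=-1)   # per-state distance
    violation = np.maximum(0.0, radius - d)        # penetration depth
    return weight * float(np.sum(violation ** 2))

obstacle = np.array([1.0, 1.0])
safe_traj = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
unsafe_traj = np.array([[0.0, 0.0], [0.9, 1.0], [2.0, 2.0]])
c_safe = barrier_cost(safe_traj, obstacle, radius=0.5)
c_unsafe = barrier_cost(unsafe_traj, obstacle, radius=0.5)
```

Trajectories that stay outside the radius incur zero penalty, so the term only reshapes the cost landscape near the unsafe set.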

6. Algorithmic Trade-Offs and Scalability

  • Memory and Compute: full optimal control requires differentiation through the entire flow trajectory (O(N) memory in the number of integration steps). MPC-Flow admits K=1 single-step variants requiring only O(1) memory, essential for inference with large diffusion/flow models (e.g., FLUX.2 32B) (Webber et al., 30 Jan 2026).
  • Backpropagation: multi-step (K > 1) RHC variants enable lookahead and better planning at the expense of more memory; K=1 single-step variants are more scalable but may struggle under extreme ill-posedness unless value-function approximation is incorporated.
  • Bidirectional warm-start: Coupling generation and optimization accelerates convergence and enhances reactivity to non-stationary or dynamic environments (Mizuta et al., 2 Aug 2025).

7. Limitations, Extensions, and Future Directions

Current limitations include:

  • Requirement for differentiable terminal costs and constraints: non-differentiable safety can only be handled via surrogates (Webber et al., 30 Jan 2026).
  • Difficulty with extreme null spaces in inverse operators, where single-step MPC cannot “fill in” poorly observed regions, suggesting the need for multi-step or value-learning augmentation.
  • OOD generalization is contingent on latent projection and its optimization landscape; high-dimensional or topologically novel scenarios may present challenges (Power et al., 2022).

Proposed future research directions follow from these limitations: multi-step or value-learning augmentation for severely ill-posed inverse problems, constraint handling beyond differentiable surrogates, and more robust latent projection for OOD generalization.
