Deterministic Optimal Transport
- Deterministic optimal transport is a framework that transports measures via deterministic maps, rooted in the Monge problem and convex analysis.
- It leverages convex optimization and PDE methods to reformulate transport problems, ensuring unique and stable solutions.
- Discrete formulations enable robust numerical algorithms applicable to data science, physics, and computational geometry.
Deterministic optimal transport refers to the class of transport problems in which the mapping from source to target is realized via functions or maps, rather than general couplings or stochastic kernels. The central object is the Monge problem of finding a deterministic push-forward map T that transfers a source measure μ to a target measure ν while minimizing an average cost functional, typically of the form ∫ c(x, T(x)) dμ(x) with c(x, y) = |x − y|²/2. Under regularity assumptions, this minimization yields a unique and well-structured solution grounded in convex analysis and PDE theory. Deterministic formulations have deep connections to nonlinear elliptic partial differential equations, convex optimization, and geometric measure theory; the resulting theories underpin a host of modern numerical algorithms and applications in mathematics, physics, and data science.
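In one dimension with quadratic cost, the Monge map is the monotone rearrangement T = F_ν⁻¹ ∘ F_μ; for two empirical measures with the same number of atoms, this reduces to pairing samples in sorted order. A minimal pure-Python sketch (the sample values below are illustrative):

```python
# 1-D deterministic optimal transport with quadratic cost:
# the Monge map is the monotone rearrangement, i.e. the i-th
# smallest source sample is sent to the i-th smallest target sample.

def monge_map_1d(source, target):
    """Return {source sample: target sample} for equal-size samples."""
    assert len(source) == len(target)
    return dict(zip(sorted(source), sorted(target)))

def transport_cost(mapping):
    """Average quadratic cost of a deterministic map."""
    return sum((x - y) ** 2 for x, y in mapping.items()) / len(mapping)

# Example: two small empirical measures (illustrative values).
mu = [0.9, 0.1, 0.5]
nu = [1.1, 1.9, 1.5]
T = monge_map_1d(mu, nu)
print(T)                  # monotone pairing: 0.1->1.1, 0.5->1.5, 0.9->1.9
print(transport_cost(T))  # average squared displacement
```

The sorted pairing is the unique monotone map, which is exactly the optimal deterministic coupling for the quadratic cost in one dimension.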
1. Classical Formulation and Convex Reformulation
The quadratic-cost deterministic optimal transport problem seeks inf_T ∫_Ω |x − T(x)|² dμ(x), where T is a measurable map pushing a density ρ on a source domain Ω to a density σ on a convex target Λ with equal mass (Lindsey et al., 2016). Brenier's theorem establishes that T = ∇φ for a convex potential φ solving the Monge-Ampère equation

det D²φ(x) · σ(∇φ(x)) = ρ(x),

with ∇φ(Ω) ⊆ Λ. Lindsey–Rubinstein demonstrate that enforcing the Monge-Ampère equation softly, via penalization of violations, leads to a convex variational inequality on the space of convex functions. Two convex functionals, one built from a log-determinant penalty and one from a power penalty on the Hessian, make the infinite-dimensional problem amenable to convex optimization whenever the target density σ is log-concave.
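One representative penalty of this type, sketched here in the spirit of the log-determinant functional (the precise objective in Lindsey–Rubinstein differs in its details), penalizes excess contraction of volume one-sidedly:

```latex
% Soft enforcement of  det D^2\varphi(x)\,\sigma(\nabla\varphi(x)) = \rho(x)
% via a one-sided, convex penalty on excess contraction:
F[\varphi] \;=\; \int_\Omega
  \max\Bigl(0,\; \log\rho(x) \;-\; \log\det D^2\varphi(x)
                 \;-\; \log\sigma\bigl(\nabla\varphi(x)\bigr)\Bigr)\,dx .
% Convexity in \varphi: D^2\varphi and \nabla\varphi depend linearly on
% \varphi, -\log\det is convex on symmetric positive definite matrices,
% and -\log\sigma is convex precisely when \sigma is log-concave.
```

This makes explicit why log-concavity of σ is the operative assumption: it is what renders the composite penalty convex in φ.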
2. Discretization and Algorithmic Solution
Discretization proceeds by triangulating Ω into simplices, introducing nodal values φ_i ≈ φ(x_i) and gradients g_i ≈ ∇φ(x_i). Discrete convexity constraints φ_j ≥ φ_i + ⟨g_i, x_j − x_i⟩, alongside enforcement of g_i ∈ Λ, guarantee global convexity. On each simplex k, the discrete Jacobian J_k, defined by the relations g_j − g_i = J_k (x_j − x_i) over the simplex's vertices, approximates the local Hessian D²φ. The finite-dimensional convex program (the log-determinant variant LDMAOP or its power variant) aggregates these over all simplices with penalties reflecting volume-preservation violations. Provided σ is log-concave, the discrete problem remains a cone-convex optimization task, solvable by modern interior-point solvers such as YALMIP + MOSEK (Lindsey et al., 2016).
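The discrete Jacobian can be illustrated on a single 2-D simplex: given the gradients attached to its three vertices, J is the unique 2×2 matrix solving g_j − g_0 = J (x_j − x_0). A pure-Python sketch using Cramer's rule (for a quadratic potential φ(x) = ½ xᵀAx, whose gradient is Ax, the recovery is exact):

```python
# Recover the discrete Jacobian J on one 2-D simplex from the
# gradients g_i attached to its vertices: g_j - g_0 = J (x_j - x_0).

def discrete_jacobian_2d(xs, gs):
    """xs: three vertex positions (x, y); gs: gradients at those vertices.
    Solves J @ [e1 e2] = [d1 d2] for the 2x2 matrix J via Cramer's rule."""
    e1x, e1y = xs[1][0] - xs[0][0], xs[1][1] - xs[0][1]   # edge x1 - x0
    e2x, e2y = xs[2][0] - xs[0][0], xs[2][1] - xs[0][1]   # edge x2 - x0
    d1x, d1y = gs[1][0] - gs[0][0], gs[1][1] - gs[0][1]   # g1 - g0
    d2x, d2y = gs[2][0] - gs[0][0], gs[2][1] - gs[0][1]   # g2 - g0
    det = e1x * e2y - e2x * e1y        # nonzero for a nondegenerate simplex
    return [[(d1x * e2y - d2x * e1y) / det, (d2x * e1x - d1x * e2x) / det],
            [(d1y * e2y - d2y * e1y) / det, (d2y * e1x - d1y * e2x) / det]]

# Check against the quadratic potential phi(x) = x^T A x / 2, gradient A x:
A = [[2.0, 0.5], [0.5, 1.0]]           # symmetric positive definite
grad = lambda p: (A[0][0] * p[0] + A[0][1] * p[1],
                  A[1][0] * p[0] + A[1][1] * p[1])
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
J = discrete_jacobian_2d(verts, [grad(v) for v in verts])
print(J)  # recovers the Hessian A exactly for a quadratic potential
```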
3. Convergence Analysis and Numerical Properties
Suppose a sequence of triangulations is refined and solutions to the discretized program are obtained. The piecewise-affine convex reconstructions converge uniformly to the unique Brenier potential φ if the discrete Monge-Ampère "error" vanishes as the mesh diameter tends to zero and the limit is feasible (Proposition 3.4). Under regularity of the densities and the potential, the DMAOP cost decays to zero under mesh refinement and one obtains pointwise convergence of gradients. Numerically, the program exhibits first-order accuracy in two dimensions and robust stability even on highly nonconvex domains.
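The flavor of the convergence statement can be seen in one dimension, where the Brenier map is known in closed form. The sketch below (illustrative only, not the DMAOP scheme) transports the uniform density on [0, 1] to the density 2y on [0, 1], whose exact map is T(x) = √x, and checks that the sup-norm error of a piecewise-constant quantile approximation shrinks under refinement:

```python
import math

# 1-D illustration of convergence under mesh refinement (not the DMAOP
# scheme itself): uniform density on [0,1] -> density 2y on [0,1].
# Exact Brenier map: T(x) = sqrt(x). Approximate it by a piecewise-
# constant map built from n quantile midpoints.

def approx_map(n):
    mids = [math.sqrt((i + 0.5) / n) for i in range(n)]
    return lambda x: mids[min(int(x * n), n - 1)]

def sup_error(n, probes=1000):
    """Sup-norm error of the n-cell approximation on a fine probe grid."""
    T = approx_map(n)
    return max(abs(T((j + 0.5) / probes) - math.sqrt((j + 0.5) / probes))
               for j in range(probes))

errors = [sup_error(n) for n in (10, 20, 40, 80)]
print(errors)  # the error shrinks monotonically as the mesh is refined
assert all(a > b for a, b in zip(errors, errors[1:]))
```

The observed rate here reflects the quantile discretization, not the first-order behavior of DMAOP; the point is only that refinement drives the discrete error to zero.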
4. Unifying Perspective on Monge-Ampère PDE Numerical Methods
Any discrete method producing approximate convex potentials φ_h, and whose discrete Hessian determinants converge appropriately, enforces the volume preservation required of optimal maps (Lindsey et al., 2016). DMAOP, wide-stencil schemes, and Oliker–Prussner methods all fit this meta-framework: they penalize contraction in volume or violations of the determinant, and thereby induce convergence to the continuous Brenier solution. The nonnegative penalty on excess contraction in DMAOP provides both a consistency metric and an actionable stopping criterion for convex optimization methods.
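The penalty-as-stopping-criterion idea can be sketched in one dimension, where the Monge-Ampère equation reads φ″(x) σ(φ′(x)) = ρ(x) and the discrete Hessian is a second difference (a hedged illustration, not the paper's exact stopping rule):

```python
# 1-D sketch of the penalty as a consistency metric: measure how far a
# candidate convex potential phi is from solving
#     phi''(x) * sigma(phi'(x)) = rho(x)
# using centered differences on a uniform grid of spacing h.

def ma_residual(phi, rho, sigma, h):
    """Max one-sided 'excess contraction' residual over interior nodes."""
    res = 0.0
    for i in range(1, len(phi) - 1):
        hess = (phi[i + 1] - 2.0 * phi[i] + phi[i - 1]) / h ** 2
        grad = (phi[i + 1] - phi[i - 1]) / (2.0 * h)
        res = max(res, rho(i * h) - sigma(grad) * hess)  # one-sided penalty
    return res

# Identity transport of the uniform density: phi(x) = x^2 / 2 solves the
# equation exactly, so the residual vanishes (up to round-off) and an
# optimizer monitoring it could stop.
n = 50
h = 1.0 / n
phi = [(i * h) ** 2 / 2.0 for i in range(n + 1)]
uniform = lambda x: 1.0
print(ma_residual(phi, uniform, uniform, h))  # ~0: candidate is optimal
```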
5. Visualization, Dynamic OT, and Practical Considerations
Static maps computed by DMAOP can be visualized directly in terms of the convex potential surface or via the pointwise mapping x ↦ ∇φ(x). Dynamic displacement interpolation, T_t(x) = (1 − t)x + t∇φ(x) for t ∈ [0, 1], simulates the transport of mass along geodesics in Wasserstein space. Numerical experiments demonstrate efficient solution and visualization across source–target configurations with convex, nonconvex, and highly irregular supports, all with robust global accuracy. Runtime scales quadratically with the number of variables due to the number of linear constraints.
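Displacement interpolation is simple to sketch once a deterministic map is in hand; in one dimension the Monge map between equal-size samples is the sorted pairing, and each mass element moves in a straight line from source to target:

```python
# Displacement interpolation T_t(x) = (1 - t) * x + t * T(x) along the
# Wasserstein geodesic, sketched for a 1-D map between sorted samples.

def displacement(source, target, t):
    """Positions of the interpolated measure at time t in [0, 1]."""
    pairs = zip(sorted(source), sorted(target))
    return [(1.0 - t) * x + t * y for x, y in pairs]

mu = [0.0, 0.2, 0.4]
nu = [1.0, 1.2, 1.4]
print(displacement(mu, nu, 0.0))  # t = 0: the source samples
print(displacement(mu, nu, 0.5))  # halfway along the geodesic
print(displacement(mu, nu, 1.0))  # t = 1: the target samples
```

Sweeping t from 0 to 1 and plotting each frame yields exactly the kind of dynamic-transport animation described above.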
6. Extensions and Generalizations
Lindsey–Rubinstein's convergence framework suggests wide applicability to Monge-Ampère–based numerical methods for optimal transport that enforce convexity and volume preservation via discrete constraints. The convex programming approach accommodates arbitrary log-concave target densities and convex supports, and supplies actionable error bounds under mesh refinement. This methodology is central to theoretical and applied advances in optimal transport, facilitating robust computation of deterministic maps in high-dimensional and irregular settings, and supporting both static and dynamic transport visualizations.
7. Context and Significance
The deterministic map-based theory is foundational in Coulomb transport, convex cost barycentric transport, generalized Monge–Ampère–type problems, and connections to dynamic programming and mass transport in stochastic thermodynamics. The reduction of the Monge-Ampère equation from a nonlinear PDE to a convex optimization problem enables the development of scalable, accurate, and theoretically rigorous algorithms (Lindsey et al., 2016). Deterministic optimal transport underlies key advances in data science, computational geometry, and mathematical physics, serving as a backbone for modern OT solvers and analysis pipelines.