Sinkhorn-Approximated Losses
- Sinkhorn-approximated losses are a class of optimal transport-based loss functions that use entropic regularization to produce differentiable and statistically robust metrics.
- They leverage fixed-point Sinkhorn iterations to efficiently approximate the regularized OT cost, ensuring convergence of the dual potentials under marginal constraints.
- Advanced variants extend this framework to online, partial transport, and generalized settings, enabling applications in generative modeling, robust optimization, and distributionally robust learning.
Sinkhorn-approximated losses are a class of optimal transport-based loss functions incorporating entropic regularization to achieve tractable, differentiable, and statistically robust optimization over probability distributions. These losses interpolate between optimal transport (OT) metrics such as Wasserstein distance and kernel-based alternatives like maximum mean discrepancy (MMD), and form the computational backbone of modern generative modeling, robust optimization, and related machine learning frameworks.
1. Mathematical Foundations of Sinkhorn Losses
Sinkhorn-approximated losses originate from the regularized OT problem. Given two probability measures (taken here as discrete, $\alpha = \sum_i a_i \delta_{x_i}$ and $\beta = \sum_j b_j \delta_{y_j}$) and a cost matrix $C$ with entries $C_{ij} = c(x_i, y_j)$, the entropic OT cost is

$$\mathrm{OT}_\varepsilon(\alpha, \beta) \;=\; \min_{P \in \Pi(a, b)} \; \langle C, P \rangle + \varepsilon \sum_{ij} P_{ij}\left(\log P_{ij} - 1\right),$$

where $\Pi(a, b) = \{ P \in \mathbb{R}_+^{n \times m} : P\mathbf{1} = a,\; P^\top \mathbf{1} = b \}$ is the set of admissible couplings.
The Sinkhorn divergence, used to debias entropic OT and ensure metric properties, is

$$S_\varepsilon(\alpha, \beta) \;=\; \mathrm{OT}_\varepsilon(\alpha, \beta) - \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\alpha, \alpha) - \tfrac{1}{2}\,\mathrm{OT}_\varepsilon(\beta, \beta).$$

Tuning $\varepsilon$ interpolates between the geometric OT regime ($\varepsilon \to 0$) and the MMD regime ($\varepsilon \to \infty$), where the coupling approaches the independent product $a b^\top$ and $S_\varepsilon$ degenerates to an energy- or kernel-based divergence.
2. Computational Algorithms: Sinkhorn Iterations
The central computational procedure for evaluating Sinkhorn losses is a fixed-point iteration referred to as the Sinkhorn algorithm. Defining the Gibbs kernel $K = e^{-C/\varepsilon}$ (entrywise), the optimal plan is sought in the scaling form $P = \mathrm{diag}(u)\, K\, \mathrm{diag}(v)$, with the marginal constraints enforced by the iterative updates

$$u^{(\ell+1)} = \frac{a}{K v^{(\ell)}}, \qquad v^{(\ell+1)} = \frac{b}{K^\top u^{(\ell+1)}}.$$

All vector divisions are element-wise. After $L$ iterations, the coupling is $P^{(L)} = \mathrm{diag}(u^{(L)})\, K\, \mathrm{diag}(v^{(L)})$, yielding an approximate cost $\langle C, P^{(L)} \rangle$ (Genevay et al., 2017). This scheme admits robust implementation on modern GPUs with batch sizes in the range $128$–$512$ and iteration counts typically $10$–$50$, depending on the choice of $\varepsilon$.
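A minimal NumPy sketch of these scaling updates, together with the debiased divergence from Section 1, is given below. Uniform weights and a squared-Euclidean cost are assumed for concreteness, the function names are illustrative, and a log-domain implementation is preferable for small $\varepsilon$:

```python
import numpy as np

def entropic_ot(x, y, eps=0.1, n_iters=50):
    """Entropic OT cost OT_eps between uniform empirical measures on x and y."""
    n, m = len(x), len(y)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)        # uniform marginals
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)     # squared-Euclidean cost
    K = np.exp(-C / eps)                                   # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                               # element-wise scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                        # coupling diag(u) K diag(v)
    return (P * C).sum() + eps * (P * (np.log(P + 1e-300) - 1.0)).sum()

def sinkhorn_divergence(x, y, eps=0.1, n_iters=50):
    """Debiased Sinkhorn divergence S_eps built from three OT_eps evaluations."""
    return (entropic_ot(x, y, eps, n_iters)
            - 0.5 * entropic_ot(x, x, eps, n_iters)
            - 0.5 * entropic_ot(y, y, eps, n_iters))
```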
For generalized OT problems including partial transport or unbalanced couplings, the Sinkhorn framework adapts via proximal/divide updates and clipping/min operations on dual scalings to enforce penalized marginal constraints (Bai, 9 Jul 2024).
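As an illustration of how penalized marginals change the iteration, the following NumPy sketch implements the standard KL-penalized (unbalanced) variant, in which the exact scaling step is damped by an exponent; the GOPT updates of (Bai, 9 Jul 2024) additionally involve clipping/min operations and differ in detail, and `rho` and the function name here are illustrative:

```python
import numpy as np

def unbalanced_sinkhorn(a, b, C, eps=0.1, rho=1.0, n_iters=200):
    """Sinkhorn scaling with KL-penalized (soft) marginal constraints.

    rho -> infinity recovers balanced Sinkhorn; finite rho permits mass
    creation/destruction.
    """
    K = np.exp(-C / eps)
    tau = rho / (rho + eps)            # exponent that damps the exact scaling update
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = (a / (K @ v)) ** tau
        v = (b / (K.T @ u)) ** tau
    return u[:, None] * K * v[None, :]  # penalized coupling
```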
3. Differentiability, Gradient Computation, and Implicit Differentiation
The entropic regularization confers infinite differentiability ($C^\infty$) on both the Sinkhorn cost and the Sinkhorn divergence in the interior of the simplex (Luise et al., 2018). Efficient gradient computation leverages either (i) backpropagation through unrolled Sinkhorn iterations, or (ii) implicit differentiation of the first-order optimality (KKT) conditions

$$P^*_{ij} = u_i\, e^{-C_{ij}/\varepsilon}\, v_j, \qquad P^*\mathbf{1} = a, \qquad (P^*)^\top \mathbf{1} = b,$$

where $P^*$ is the optimal plan and $(u, v)$ are the optimal dual scalings (equivalently, the dual potentials $f = \varepsilon \log u$, $g = \varepsilon \log v$). Solving the resulting sparse linear system yields vector–Jacobian products for analytic gradients with provable error bounds (Eisenberger et al., 2022). Implicit differentiation is advantageous for large problem sizes or iteration counts, providing superior memory efficiency over unrolled automatic differentiation (AD).
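When only gradients with respect to the cost matrix (and hence the upstream network parameters) are needed, the envelope theorem gives $\nabla_C \mathrm{OT}_\varepsilon = P^*$, which can be exploited with memory independent of the iteration count. The PyTorch sketch below illustrates this shortcut; it is a lighter-weight alternative in the same spirit as implicit differentiation, not the vector–Jacobian scheme of (Eisenberger et al., 2022), and the function names are illustrative:

```python
import torch

def sinkhorn_plan(C, a, b, eps=0.1, n_iters=100):
    """Run the Sinkhorn scalings without tracking gradients; return the coupling P*."""
    with torch.no_grad():
        K = torch.exp(-C / eps)
        u, v = torch.ones_like(a), torch.ones_like(b)
        for _ in range(n_iters):
            u = a / (K @ v)
            v = b / (K.T @ u)
        return u[:, None] * K * v[None, :]

def envelope_sinkhorn_loss(C, a, b, eps=0.1, n_iters=100):
    """Scalar whose gradient w.r.t. C equals the optimal plan P* (envelope theorem).

    Memory use is independent of n_iters, unlike unrolled autodiff.  Gradients
    with respect to the marginal weights a, b are NOT produced by this shortcut.
    """
    P = sinkhorn_plan(C, a, b, eps, n_iters)   # treated as a constant w.r.t. C
    return (P * C).sum()
```

Calling `.backward()` on the result deposits $P^*$ into `C.grad`, so the gradient propagates to any network that produced `C`.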
4. Statistical and Learning-Theoretic Properties
Entropic regularization produces strictly convex transport losses with improved sample complexity and variance. For small $\varepsilon$, the OT bias persists and the curse of dimensionality remains ($O(n^{-1/d})$ rates); for large $\varepsilon$, the Sinkhorn loss exhibits MMD-like sample complexity ($O(n^{-1/2})$) (Genevay et al., 2017). In supervised learning, the sharp Sinkhorn loss ($\langle C, P^* \rangle$, without explicit entropy penalty in the final cost) guarantees universal consistency and fast excess-risk convergence rates under standard RKHS assumptions (Luise et al., 2018).
Recent work verifies second-order Hadamard differentiability of Sinkhorn divergences, facilitating local quadratic approximations and enabling rigorous coreset construction (Kokot et al., 28 Apr 2025). This functional smoothness underpins compressed representations and efficient subsampling schemes.
5. Advanced Variants: Online, Generalized, and Partial Transport
Algorithms such as Online Sinkhorn (Mensch et al., 2020) permit stochastic streaming estimation of Sinkhorn-approximated losses, maintaining non-parametric mixture representations of the scaling potentials $(u_t, v_t)$ (equivalently, of the dual potentials) over previously seen samples; a simplified sketch follows the list below:
- New sample batches update the mixture weights via stochastic approximation (Robbins–Monro step),
- Theoretical guarantees yield near-optimal error rates in terms of the number of streamed samples.
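The following NumPy sketch illustrates this style of update in a simplified form: the potentials are stored as log-sum-exp mixtures over past samples, and each incoming batch is mixed in with a Robbins–Monro step. Class and function names, the squared-Euclidean cost, and the weight bookkeeping are illustrative simplifications, not the exact algorithm of (Mensch et al., 2020):

```python
import numpy as np

def pairwise_cost(x, y):
    """Squared-Euclidean cost between point batches x (n, d) and y (m, d)."""
    return ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

class OnlineSinkhornSketch:
    """Simplified online-Sinkhorn-style estimator (uniform batches, squared cost).

    The potentials are stored as log-sum-exp mixtures over past samples,
        f(x) = -eps * logsumexp_i( log_wy[i] - C(x, ys[i]) / eps ),
    and symmetrically for g.  Step sizes lr should lie in (0, 1).
    """

    def __init__(self, eps=0.1):
        self.eps = eps
        self.xs = self.ys = None            # stored support points
        self.log_wx = self.log_wy = None    # log mixture weights

    def _eval(self, points, support, log_w):
        if support is None:                 # before any update the potentials are zero
            return np.zeros(len(points))
        z = log_w[None, :] - pairwise_cost(points, support) / self.eps
        return -self.eps * np.logaddexp.reduce(z, axis=1)

    def f(self, x):
        return self._eval(x, self.ys, self.log_wy)

    def g(self, y):
        return self._eval(y, self.xs, self.log_wx)

    def update(self, x_batch, y_batch, lr):
        """Robbins-Monro step: mix the incoming batch into both potentials."""
        new_wy = np.log(lr / len(y_batch)) + self.g(y_batch) / self.eps
        new_wx = np.log(lr / len(x_batch)) + self.f(x_batch) / self.eps
        if self.ys is None:
            self.ys, self.log_wy = y_batch, new_wy
            self.xs, self.log_wx = x_batch, new_wx
        else:                               # decay old weights by (1 - lr), append new ones
            self.log_wy = np.concatenate([self.log_wy + np.log1p(-lr), new_wy])
            self.log_wx = np.concatenate([self.log_wx + np.log1p(-lr), new_wx])
            self.ys = np.concatenate([self.ys, y_batch])
            self.xs = np.concatenate([self.xs, x_batch])

    def ot_estimate(self, x, y):
        """Plug-in dual estimate of the regularized OT cost on fresh samples."""
        return self.f(x).mean() + self.g(y).mean()
```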
Generalized and partial transport models (GOPT) introduce penalty functions for mass destruction/creation and enable coupled clipping/min operations inside Sinkhorn iterates. This confers flexibility over the balanced/unbalanced spectrum by adjusting mass constraints in the primal and dual (Bai, 9 Jul 2024).
Convex regularization beyond Shannon entropy is accommodated in generalized Sinkhorn frameworks (Marino et al., 2020) that admit alternative regularizers (e.g., Tsallis entropy, quadratic penalties), with each instantiation yielding a corresponding dual, complementary-slackness conditions, and an IPFP-type iterative scaling scheme.
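In the notation of Section 1, these frameworks replace the Shannon-entropy term with a convex regularizer applied entrywise; schematically (writing $\Psi$ as a placeholder symbol, not the notation of (Marino et al., 2020)):

$$\min_{P \in \Pi(a, b)} \; \langle C, P \rangle + \varepsilon \sum_{ij} \Psi(P_{ij}), \qquad \Psi(t) = \begin{cases} t \log t - t & \text{(Shannon entropy: classical Sinkhorn)} \\ \dfrac{t^{q} - t}{q - 1} & \text{(Tsallis entropy, } q \neq 1\text{)} \\ \tfrac{1}{2} t^{2} & \text{(quadratic regularization)} \end{cases}$$

Roughly speaking, each choice induces its own dual and an IPFP-type scaling iteration in which the exponential map of classical Sinkhorn is replaced by the derivative of the convex conjugate $\Psi^*$.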
6. Practical Implementation, Optimization, and Applications
In contemporary deep learning, Sinkhorn layers are integrated end-to-end (a PyTorch-style sketch follows this list):
- Cost networks compute the cost matrix $C$ from learned feature embeddings,
- Sinkhorn iterations yield the coupling and the regularized OT loss,
- Backpropagation exploits differentiability of matrix/tensor operations.
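For concreteness, here is a self-contained PyTorch sketch of such a layer: the cost matrix is built from feature embeddings, log-domain Sinkhorn iterations are unrolled so that gradients flow back to the features, and the (sharp) transport cost is returned. The module name, defaults, and the squared-Euclidean cost are illustrative choices rather than a reference implementation:

```python
import math
import torch
import torch.nn as nn

class SinkhornLoss(nn.Module):
    """Differentiable entropic-OT loss between two batches of learned features."""

    def __init__(self, eps=0.1, n_iters=30):
        super().__init__()
        self.eps, self.n_iters = eps, n_iters

    def forward(self, fx, fy):
        # Cost matrix from learned features (squared Euclidean as an example).
        C = torch.cdist(fx, fy, p=2) ** 2
        n, m = C.shape
        log_a = torch.full((n,), -math.log(n), device=C.device)   # uniform weights
        log_b = torch.full((m,), -math.log(m), device=C.device)
        f = torch.zeros(n, device=C.device)
        g = torch.zeros(m, device=C.device)
        # Unrolled log-domain Sinkhorn updates on the dual potentials (stable for small eps).
        for _ in range(self.n_iters):
            f = -self.eps * torch.logsumexp((g[None, :] - C) / self.eps + log_b[None, :], dim=1)
            g = -self.eps * torch.logsumexp((f[:, None] - C) / self.eps + log_a[:, None], dim=0)
        # Recover the coupling and return the (sharp) transport cost <C, P>.
        P = torch.exp((f[:, None] + g[None, :] - C) / self.eps + log_a[:, None] + log_b[None, :])
        return (P * C).sum()
```

A generative-modeling loop would pass generated and real features through the same module, e.g. `SinkhornLoss()(net(x_fake), net(x_real))`, and backpropagate through both the features and the unrolled iterations.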
Loss surrogate terms in physics-informed neural networks, generative adversarial nets (GANs), Schrödinger bridges, and robust optimization pipelines use Sinkhorn divergences to enforce distributional constraints or supply differentiable distribution-matching objectives (Genevay et al., 2017, Nodozi et al., 2023, Wang et al., 2021). Distributionally robust optimization (DRO) with Sinkhorn balls replaces the hard Wasserstein supremum by a smooth log-sum-exp, making worst-case training tractable (Wang et al., 2021).
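The smoothing idea behind Sinkhorn DRO can be sketched as follows: the hard inner supremum of Wasserstein DRO is replaced by a log-sum-exp soft maximum over sampled perturbations of each data point. The dual multiplier, the Gaussian proposal distribution, and all names below are illustrative assumptions, and the exact dual of (Wang et al., 2021) differs in its constants and reference measure:

```python
import math
import torch

def soft_worst_case_loss(loss_fn, x_batch, lam=1.0, eps=0.1, n_samples=16, sigma=0.1):
    """Schematic smoothed worst-case objective in the spirit of Sinkhorn DRO.

    The hard inner supremum  sup_z [loss(z) - lam * c(x, z)]  is replaced by a
    log-sum-exp soft maximum over sampled perturbations z of each data point x.
    The dual multiplier lam is treated as a fixed hyperparameter here.
    """
    d = x_batch.shape[-1]
    # Gaussian proposal perturbations around each data point: shape (n_samples, B, d).
    z = x_batch[None, :, :] + sigma * torch.randn(n_samples, *x_batch.shape)
    transport_cost = ((z - x_batch[None, :, :]) ** 2).sum(-1)           # c(x, z), shape (n_samples, B)
    per_sample_loss = loss_fn(z.reshape(-1, d)).reshape(n_samples, -1)  # loss(z), shape (n_samples, B)
    adjusted = (per_sample_loss - lam * transport_cost) / eps
    # Soft maximum over perturbations (log-mean-exp), then average over the batch.
    return (eps * (torch.logsumexp(adjusted, dim=0) - math.log(n_samples))).mean()
```

Here `loss_fn` is any per-example loss mapping an `(N, d)` tensor to `N` scalar losses (e.g., a classifier's per-example cross-entropy with labels broadcast accordingly).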
Specific algorithms, such as CO2-coresets, compress large datasets for regularized Sinkhorn loss minimization via spectral decompositions and MMD-based matching in RKHS, with polylogarithmic coreset size and near-optimal approximation error (Kokot et al., 28 Apr 2025).
7. Computational Complexity, Approximation Strategies, and Scalability
Each Sinkhorn iteration costs $O(nm)$ for an $n \times m$ cost matrix (two matrix–vector products with the Gibbs kernel); the total cost is $O(Lnm)$ for $L$ steps. For large $n$ and $m$, strategies include the following (a warm-start sketch follows this list):
- Screening (Screenkhorn) to selectively freeze negligible dual components (Alaya et al., 2019),
- Mini-batching to fit cost matrices in memory,
- Warm-starting dual variables for successive optimization loops,
- Implicit differentiation for scalable backward passes with fixed memory requirements.
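The mini-batching and warm-starting items above can be combined as in the following NumPy sketch (function name and defaults are illustrative, not a specific published routine):

```python
import numpy as np

def sinkhorn_cost_warm(C, a, b, eps=0.1, n_iters=20, u0=None, v0=None):
    """One Sinkhorn solve that accepts warm-started dual scalings (u0, v0).

    Reusing (u, v) from a previous solve with a similar cost matrix (e.g. the
    previous outer optimization step on the same mini-batch pairing) typically
    needs far fewer iterations than a cold start, while mini-batching keeps C
    at batch x batch instead of n x n.
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a) if u0 is None else u0
    v = np.ones_like(b) if v0 is None else v0
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]
    return (P * C).sum(), u, v   # return the duals so the caller can warm-start the next solve
```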
Approximation bounds characterize the trade-off between marginal violations and objective error; screened or compressed solves can keep marginal-constraint violations below a prescribed tolerance while delivering substantial computational savings under appropriate budgets (Alaya et al., 2019).
Sinkhorn-approximated losses synthesize optimal transport theory, entropic regularization, and iterative matrix scaling into a scalable, robust, and fully differentiable loss framework supporting a wide spectrum of modern statistical, optimization, and learning applications. With explicit control via the regularization strength $\varepsilon$ and extensions to convex regularizers, partial-mass regimes, and sample-streaming/distributionally robust pipelines, Sinkhorn divergences constitute a foundational tool for high-dimensional generative modeling, robust learning, and applied optimal transport in both discrete and continuous settings.