Rot Mover’s Distance: Regularized OT
- Rot Mover’s Distance (RMD) is a generalized optimal transport metric that integrates smooth convex regularization to modulate plan smoothness and sparsity.
- It bridges the classical Earth Mover’s Distance and minimally regularized couplings by leveraging Bregman divergences and iterative projection algorithms such as ASA and NASA.
- Empirical results, including applications in audio-scene classification, highlight RMD’s potential in enhancing OT-based kernel methods for pattern recognition.
The Rot Mover’s Distance (RMD) is a generalization of the classic Earth Mover’s Distance (EMD) within the framework of discrete optimal transport. RMD augments the standard transport problem by introducing a smooth convex regularization penalty on the joint transport plan, yielding a new class of metrics rooted in matrix nearness with respect to Bregman divergences. This construction enables interpolation between classical EMD and minimally regularized couplings, where the choice of regularizer controls plan smoothness, sparsity, or other desired structure. RMD recovers established methods such as Sinkhorn–Knopp for entropic regularization and extends to a wide spectrum of regularizers and induced divergences, with efficient algorithms tailored to the structure of each case (Dessein et al., 2016).
1. Mathematical Formulation: Primal and Dual RMD
Given probability vectors $p, q$ in the probability simplex, a nonnegative cost matrix $\gamma \in \mathbb{R}_+^{d \times d}$, and a convex, smooth regularizer $\phi$, RMD is formulated on the transport polytope

$$\Pi(p, q) = \left\{ \pi \in \mathbb{R}_+^{d \times d} : \pi \mathbf{1} = p,\; \pi^\top \mathbf{1} = q \right\}.$$
- Primal (Constrained) Formulation:
Given an "allowance" for the regularizer,
where solves .
- Dual (Penalized) Formulation:
Introducing a Lagrange parameter $\lambda > 0$,
$$d_{\gamma,\phi}^{\lambda}(p, q) = \langle \pi^\star, \gamma \rangle, \qquad \pi^\star = \operatorname*{arg\,min}_{\pi \in \Pi(p,q)} \; \langle \pi, \gamma \rangle + \lambda\,\phi(\pi).$$
For $\eta$ below a threshold, there exists a unique $\lambda > 0$ with $\phi(\pi^\star) = \eta$ so that the primal and dual minimizers coincide. Classical EMD is recovered as $\lambda \to 0$, while $\lambda \to \infty$ yields the minimal-$\phi$ coupling (Dessein et al., 2016).
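To make the penalized form concrete, here is a minimal NumPy sketch (entropic $\phi$; the function and variable names are illustrative, not from the paper) that evaluates the dual objective on a feasible plan:

```python
import numpy as np

# Evaluating the penalized (dual) objective <pi, gamma> + lam * phi(pi)
# for the entropic regularizer on a feasible transport plan.
def penalized_objective(pi, gamma, lam):
    phi = np.sum(pi * np.log(pi) - pi)   # entropic phi; pi > 0 assumed
    return np.sum(pi * gamma) + lam * phi

p, q = np.array([0.5, 0.5]), np.array([0.3, 0.7])
pi = np.outer(p, q)                      # independent coupling: always in Pi(p, q)
gamma = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
assert np.allclose(pi.sum(axis=1), p) and np.allclose(pi.sum(axis=0), q)
print(penalized_objective(pi, gamma, lam=0.1))
```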
2. Bregman-Projection Matrix-Nearness Equivalence
Let $\phi$ be the Bregman-type information regularizer generated by a convex function, either separable ($\phi(\pi) = \sum_{i,j} \phi_{ij}(\pi_{ij})$) or general. It induces the Bregman divergence
$$B_\phi(\pi \,\|\, \xi) = \phi(\pi) - \phi(\xi) - \langle \nabla\phi(\xi),\, \pi - \xi \rangle.$$
The dual RMD problem equivalently minimizes this Bregman divergence,
$$\pi^\star = \operatorname*{arg\,min}_{\pi \in \Pi(p,q)} B_\phi(\pi \,\|\, \xi),$$
where $\xi = \nabla\phi^*(-\gamma/\lambda)$, expressed through the Fenchel conjugate $\phi^*$, is obtained by unconstrained minimization of $\langle \pi, \gamma \rangle + \lambda\,\phi(\pi)$. For the entropic regularizer, $\xi_{ij} = e^{-\gamma_{ij}/\lambda - 1}$, yielding a Kullback–Leibler projection interpretation. This general Bregman-projection framework enables the use of projection algorithms for regularized OT (Dessein et al., 2016).
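A one-line check makes the equivalence explicit: since $\xi = \nabla\phi^*(-\gamma/\lambda)$ means $\nabla\phi(\xi) = -\gamma/\lambda$,

$$\lambda\, B_\phi(\pi \,\|\, \xi) = \lambda\,\phi(\pi) - \lambda\,\phi(\xi) + \langle \gamma,\, \pi - \xi \rangle = \langle \pi, \gamma \rangle + \lambda\,\phi(\pi) + \text{const},$$

where the constant collects all terms independent of $\pi$, so minimizing $B_\phi(\cdot \,\|\, \xi)$ over $\Pi(p, q)$ is exactly the penalized RMD problem up to scaling.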
3. Iterative Bregman Projection Algorithms: ASA and NASA
Efficient solution of the RMD projection is based on iterative Bregman projections, with two principal algorithmic frameworks determined by the domain of the regularizer $\phi$:
- Nonnegative Alternate Scaling Algorithm (NASA):
Used when the domain of $\phi$ does not enforce $\pi \geq 0$. Dykstra's algorithm augments alternate Bregman projections with correction variables, cycling through nonnegativity, row-sum, and column-sum constraints. Newton–Raphson solves the one-dimensional per-row and per-column projection equations when $\phi$ is separable. Each iteration maintains correction vectors to ensure convergence.
- Alternate Scaling Algorithm (ASA):
Applied when $\phi$'s domain is contained in $\mathbb{R}_+^{d \times d}$, so nonnegativity is implicit. The method alternates between row-sum and column-sum Bregman projections with no correction variables. For separable $\phi$, updates decouple into per-row and per-column monotone equations efficiently solved by Newton–Raphson.
Both schemes generalize the classical projection-onto-convex-sets (POCS) framework and leverage the explicit structure of $\phi$ and $\phi^*$ for efficient updates (Dessein et al., 2016).
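As a concrete instance, for the entropic regularizer ASA reduces to Sinkhorn–Knopp matrix scaling, since both Bregman projections become diagonal rescalings of the reference matrix $\xi$. The following is a minimal NumPy sketch of that special case (names are illustrative, not from the paper):

```python
import numpy as np

def sinkhorn_rmd(p, q, gamma, lam, n_iter=1000, tol=1e-9):
    """Entropic RMD via Sinkhorn-Knopp scaling: alternating KL (Bregman)
    projections of the reference matrix xi onto the row-sum and
    column-sum constraints of the transport polytope."""
    # Reference matrix xi = grad(phi*)(-gamma/lam) for the entropic
    # regularizer; the constant factor e^{-1} is absorbed by the scalings.
    xi = np.exp(-gamma / lam)
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iter):
        v = q / (xi.T @ u)           # column-sum (Bregman) projection
        u_prev, u = u, p / (xi @ v)  # row-sum (Bregman) projection
        if np.max(np.abs(u - u_prev)) < tol:
            break
    plan = u[:, None] * xi * v[None, :]
    return plan, np.sum(plan * gamma)
```

For NASA, the same alternation would additionally be wrapped in Dykstra correction steps to handle the explicit nonnegativity constraint.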
4. Regularizer Families and Induced Divergences
The RMD framework supports a broad gallery of convex regularizers $\phi$ (see the table below), each yielding a distinct Bregman divergence $B_\phi$ and associated geometric and statistical properties.
| Regularizer Type | Definition of $\phi$ | Induced $B_\phi$ |
|---|---|---|
| Entropic (KL) | $\sum_{i,j} \pi_{ij} \log \pi_{ij} - \pi_{ij}$ | Kullback–Leibler |
| Burg (Itakura–Saito) | $\sum_{i,j} \pi_{ij} - \log \pi_{ij}$ | Itakura–Saito |
| Fermi–Dirac | $\sum_{i,j} \pi_{ij} \log \pi_{ij} + (1 - \pi_{ij}) \log(1 - \pi_{ij})$ | Logistic loss |
| $\ell_p$ (quasi-norms, $0 < p < 1$) | $-\sum_{i,j} \pi_{ij}^{\,p}$ | — |
| $\ell_p$ (norms, $p > 1$) | $\sum_{i,j} \pi_{ij}^{\,p}$ | — |
| Euclidean ($\ell_2^2$) | $\tfrac{1}{2} \sum_{i,j} \pi_{ij}^2$ | Squared Euclidean |
| Hellinger-type | $-\sum_{i,j} \sqrt{1 - \pi_{ij}^2}$ | — |
| Mahalanobis (quadratic form) | $\tfrac{1}{2}\, \operatorname{vec}(\pi)^\top Q\, \operatorname{vec}(\pi),\ Q \succ 0$ | — |
Notably, the framework recovers Sinkhorn–Knopp scaling for KL entropic regularization (where $\xi_{ij} = e^{-\gamma_{ij}/\lambda - 1}$ and the Newton projections become matrix rescalings), but allows for fundamentally different plan structures, smoothing, and sparsification depending on the regularizer choice (Dessein et al., 2016).
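As a sketch of how these generators translate into divergences, the following NumPy snippet evaluates the separable $B_\phi$ for three rows of the table (the generator dictionary and helper are illustrative, not a library API):

```python
import numpy as np

# Separable Bregman divergence B_phi(x || y) = sum[ phi(x) - phi(y) - phi'(y)(x - y) ]
# for a few generators from the table above, applied elementwise.
GENERATORS = {
    "entropic":  (lambda t: t * np.log(t) - t, lambda t: np.log(t)),      # -> KL
    "burg":      (lambda t: t - np.log(t),     lambda t: 1.0 - 1.0 / t),  # -> Itakura-Saito
    "euclidean": (lambda t: 0.5 * t**2,        lambda t: t),              # -> squared Euclidean
}

def bregman_divergence(name, x, y):
    phi, dphi = GENERATORS[name]
    return float(np.sum(phi(x) - phi(y) - dphi(y) * (x - y)))
```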
5. Empirical Properties and Algorithmic Considerations
RMD exhibits several algorithmic and empirical characteristics:
- Interplay of $\lambda$ and $\phi$:
Varying the regularization parameter $\lambda$ yields a continuous interpolation between sharply optimal (EMD-like) and highly regularized transport plans. The geometry of the plan, such as anisotropy or smoothness, is strongly modulated by the nature of $\phi$.
- Computational Complexity:
For moderate dimension $d$, the Newton subproblems within ASA/NASA scale as $O(d)$ or $O(d^2)$ per projection, with overall quadratic complexity per outer iteration. Sinkhorn–Knopp (KL) admits the fastest implementation; ASA is empirically faster than NASA due to the absence of correction variables.
- Sparsity via Pruning:
Transport forbidden by the ground cost (entries with $\gamma_{ij} = +\infty$) is handled by sparse extensions, which simply exclude these indices from the updates without affecting result correctness under broad conditions (Dessein et al., 2016).
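For instance, in an entropic implementation such as the `sinkhorn_rmd` sketch above, pruning falls out naturally, since infinite costs zero out the reference matrix (illustrative snippet):

```python
import numpy as np

# Forbidden routes (gamma_ij = +inf) give xi_ij = exp(-inf) = 0, so the
# corresponding plan entries remain exactly zero throughout the scaling
# iterations; a sparse implementation simply drops them from the updates.
gamma = np.array([[0.0, np.inf],
                  [1.0, 0.0]])
xi = np.exp(-gamma / 0.1)       # xi[0, 1] == 0.0
support = np.argwhere(xi > 0)   # indices a sparse solver would actually update
```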
6. Applications and Empirical Results
Synthetic experiments on two-mode densities demonstrate that tuning $\lambda$ and choosing different regularizers $\phi$ yields qualitatively distinct mass redistributions, such as spreading versus sparsifying smoothing and various anisotropic effects. In audio-scene classification benchmarks (specifically, DCASE 2016), RMD-induced kernels, where each segment is encoded as a GMM over MFCCs with OT ground cost given by pairwise Jeffrey divergence, achieve superior or competitive accuracy relative to classical EMD-based SVM kernels. The Hellinger-type and certain $\ell_p$ penalties outperform the classic Euclidean ($\ell_2^2$) or Burg (Itakura–Saito) regularizers in discriminative capacity. This suggests that fine-grained adjustment of $\lambda$ and $\phi$ can significantly enhance OT-based kernel methods for pattern recognition and statistical tasks (Dessein et al., 2016).
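A hypothetical sketch of the kernel construction, assuming a precomputed matrix `D` of pairwise RMD values (the exponentiated-distance form is a common choice for OT kernels, though it is not guaranteed positive semidefinite):

```python
import numpy as np
from sklearn.svm import SVC

def rmd_kernel(D, sigma=1.0):
    # Exponentiated-distance kernel from pairwise RMD values; D[i, j] is
    # assumed to hold the RMD between segments i and j (e.g., between GMM
    # components with a Jeffrey-divergence ground cost, as described above).
    return np.exp(-D / sigma)

# Usage with a precomputed kernel (D_train, y_train assumed given):
# clf = SVC(kernel="precomputed").fit(rmd_kernel(D_train), y_train)
```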
7. Connections and Generalizations
RMD provides a principled interpolation and generalization over standard optimal transport, embedding the classic EMD, entropic regularization (Sinkhorn), and other divergences in a single algorithmic and theoretical scaffold. The Bregman-projection viewpoint enables leverage of convex duality and optimization-theoretic tools, including Newton–Raphson projection for separable $\phi$, Dykstra’s algorithm for general convex settings, and efficient sparse extensions. The framework is compatible with a variety of regularizer classes encountered in machine learning and information geometry, supporting both spread-promoting and sparsity-inducing couplings according to analytic or empirical desiderata (Dessein et al., 2016).