Monge-Kantorovich Transport Map
- The Monge-Kantorovich transport map is the optimal mass-preserving mapping between two probability distributions, obtained by minimizing a specified transport cost.
- It is applied in image processing for tasks like color harmonization, ensuring smooth integration of foreground and background colors through deterministic transformations.
- Implementations such as MKL-Harmonizer use linear transformations based on distribution moments, achieving robust, real-time performance in augmented reality.
The Monge-Kantorovich transport map denotes the optimal mass-preserving mapping between two probability distributions that minimizes a specified transport cost. In the context of image processing, such as color harmonization, this concept provides a mathematical framework for aligning the color distributions of different image regions. The Monge-Kantorovich transport map is central to constructing deterministic transformations that push a source distribution onto a target while minimizing quadratic costs, with critical application in efficient, real-time harmonization algorithms for augmented reality and image compositing (Larchenko et al., 16 Nov 2025).
1. Optimal Transport and the Monge-Kantorovich Problem
Given a composite image constructed by pasting a foreground object with color distribution $p$ into a background scene with color distribution $q$, the objective is to recolor the foreground so its pixel values match the statistical properties of the background, achieving a coherent visual integration. The deterministic Monge transport map $T$ must satisfy the mass-preservation constraint

$$q(T(x))\,\lvert \det J_T(x) \rvert = p(x),$$

where $J_T$ is the Jacobian matrix of $T$, and $q$ is the target distribution. Among all such maps, the Monge formulation selects the one minimizing the expected squared Euclidean distance:

$$T^{*} = \arg\min_{T}\; \mathbb{E}_{x\sim p}\big[\lVert T(x) - x\rVert^{2}\big].$$

Solvability and uniqueness under mild regularity conditions are established in the foundational works of Villani (2009).
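As a concrete instance of the mass-preservation constraint and the quadratic-cost objective, the 1-D Gaussian case admits an explicit monotone affine Monge map. The sketch below (all numeric values are illustrative, not from the paper) checks empirically that the push-forward matches the target:

```python
import numpy as np

# 1-D illustration: for source N(mu_s, s_s^2) and target N(mu_t, s_t^2),
# the quadratic-cost Monge map is the monotone affine map
#   T(x) = mu_t + (s_t / s_s) * (x - mu_s).
mu_s, s_s = 2.0, 1.5     # source moments (illustrative)
mu_t, s_t = -1.0, 0.5    # target moments (illustrative)

rng = np.random.default_rng(0)
x = rng.normal(mu_s, s_s, size=200_000)   # samples from the source
y = mu_t + (s_t / s_s) * (x - mu_s)       # push-forward through T

# Mass preservation: the push-forward of the source through T should
# match the target, so the empirical moments of y approach (mu_t, s_t).
```

For Gaussians the optimal map is affine and monotone, which is exactly the structure the MKL linear map generalizes to three color channels.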
2. The Monge–Kantorovich Linear Map
When both source and target color distributions are modeled as Gaussian measures, i.e., $\mathcal{N}(\mu_s, \Sigma_s)$ for the foreground and $\mathcal{N}(\mu_t, \Sigma_t)$ for the background, the quadratic optimal transport admits a closed-form, linear solution:

$$T(x) = A\,(x - \mu_s) + \mu_t,$$

where

$$A = \Sigma_s^{-1/2}\big(\Sigma_s^{1/2}\,\Sigma_t\,\Sigma_s^{1/2}\big)^{1/2}\,\Sigma_s^{-1/2}.$$
Hence, the Monge-Kantorovich linear map ("MKL", Editor's term) reduces optimal color harmonization to a global affine transformation estimated solely from the first and second moments of the source and target distributions. To ensure values remain within the allowable color cube $[0,1]^3$, outputs are clipped; this clipping is equivalent to a Euclidean projection onto the cube, as formally proven (Lemma 1, (Larchenko et al., 16 Nov 2025)).
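A minimal numerical sketch of this closed-form map, assuming moments estimated from pixel samples (function names and the example moments are illustrative, not the paper's code):

```python
import numpy as np

def psd_sqrt(M):
    """Symmetric square root of a positive semi-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def mkl_map(mu_s, cov_s, mu_t, cov_t):
    """Closed-form linear OT map between Gaussians:
    T(x) = A (x - mu_s) + mu_t, with
    A = S^{-1/2} (S^{1/2} cov_t S^{1/2})^{1/2} S^{-1/2},  S = cov_s."""
    S_half = psd_sqrt(cov_s)
    S_inv_half = np.linalg.inv(S_half)
    A = S_inv_half @ psd_sqrt(S_half @ cov_t @ S_half) @ S_inv_half
    return A, mu_t - A @ mu_s          # affine form T(x) = A x + b

# Illustrative harmonization of foreground pixels (N x 3, values in [0, 1]).
rng = np.random.default_rng(1)
fg = rng.normal([0.7, 0.3, 0.2], 0.05, size=(10_000, 3))  # source pixels
mu_s, cov_s = fg.mean(0), np.cov(fg.T)
mu_t = np.array([0.4, 0.45, 0.5])               # assumed background moments
cov_t = np.diag([0.01, 0.02, 0.015])

A, b = mkl_map(mu_s, cov_s, mu_t, cov_t)
out = np.clip(fg @ A.T + b, 0.0, 1.0)  # apply, then project onto the color cube
```

After the map, the recolored pixels carry (up to clipping) the target mean and covariance, which is exactly the statistical alignment harmonization seeks.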
3. Architecture: MKL-Harmonizer Implementation
MKL-Harmonizer introduces a neural encoder that predicts the optimal affine mapping in real time, targeting mobile and edge devices. The encoder employs EfficientNet-B0 (5.3M parameters, 0.39 GFLOPs at 224×224 input) as its backbone. The model ingests an RGB image concatenated with a binary mask (a foreground-indicator channel), leveraging a modified convolutional stem and a series of MBConv blocks with Swish activations. The global feature representation is reduced via pooling and a fully connected head to a 12-dimensional output, parameterizing either the distribution moments (means and covariances) or the direct mapping (affine matrix and shift). Empirical results indicate that direct regression is more robust in practice.
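Under the direct-mapping parameterization, one plausible decoding of the 12-dimensional head output is a 9 + 3 split into a 3×3 color matrix and a shift vector; the layout below is an assumption for illustration, not the paper's code:

```python
import numpy as np

def decode_params(theta):
    """Split a 12-dim encoder output into a 3x3 color matrix A and a shift b.
    The 9 + 3 layout is an assumed convention."""
    theta = np.asarray(theta, dtype=np.float64)
    return theta[:9].reshape(3, 3), theta[9:12]

def apply_filter(pixels, theta):
    """Apply the predicted global affine filter to an (N, 3) pixel array,
    clipping the result to the unit color cube."""
    A, b = decode_params(theta)
    return np.clip(pixels @ A.T + b, 0.0, 1.0)
```

With identity parameters (flattened identity matrix, zero shift), the filter leaves pixels unchanged, which is the degenerate solution the training losses must avoid.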
4. Training Procedures and Loss Functions
Supervised training is conducted on the iHarmony4-clean dataset, a refinement of iHarmony4 with artifact suppression via unmasked region replacement. The network is optimized using Adam, batch size 64, over 210 epochs with a decaying learning rate. The loss function combines:
- Label loss: $\mathcal{L}_{\text{label}} = \lVert \hat{\theta} - \theta \rVert_{1}$ (the $L_1$ norm on the MKL parameters)
- Content loss: $\mathcal{L}_{\text{content}} = \frac{1}{|M|}\sum_{i\in M} \lvert \hat{I}_i - I_i \rvert$ (per-pixel $L_1$ after MKL filtering, within the mask $M$)
- Total loss: $\mathcal{L} = \mathcal{L}_{\text{label}} + \lambda\,\mathcal{L}_{\text{content}}$, with weighting coefficient $\lambda$
This combination ensures both parameter-regression accuracy and perceptual fidelity of the harmonized object. Regularization via the $L_1$ label loss promotes sharper, more stable solutions, while the per-pixel content loss mitigates the risk of collapse to the identity mapping.
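The combined objective above can be sketched as follows; the symbol `lam` and its default value are placeholders, since the paper's exact weighting is not reproduced here:

```python
import numpy as np

def harmonizer_loss(theta_pred, theta_gt, img_pred, img_gt, mask, lam=1.0):
    """Combined training loss (sketch; lam is an assumed placeholder weight).
    - label term: L1 distance between predicted and ground-truth MKL parameters
    - content term: per-pixel L1 inside the foreground mask after MKL filtering
    """
    label = np.abs(theta_pred - theta_gt).sum()
    m = mask.astype(bool)
    content = np.abs(img_pred[m] - img_gt[m]).mean()
    return label + lam * content
```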
5. Inference Pipeline and Computational Complexity
The MKL-Harmonizer is optimized for edge deployment, with the full model (EfficientNet-B0 backbone plus head) requiring approximately 5.3M parameters and 0.4 GFLOPs per image. The on-device workflow is as follows:
- Render RGB frame and binary mask to memory.
- Encoder predicts 12 MKL parameters in a single forward pass.
- Compute the per-pixel color transformation with clipping, efficiently implemented on the GPU.

Performance metrics include:
- RTX 4060Ti: 175 it/s, 167 it/s, 137 it/s, and 41 it/s at four increasing input resolutions
- Mobile (Pixel 4a/7): 12–15 fps, with potential for doubling via zero-copy optimizations.
6. Quantitative Benchmarks and User Evaluation
Table: Quantitative Results (iHarmony4-clean)
| Method | MSE ↓ | PSNR (dB) ↑ | foreground-MSE ↓ |
|---|---|---|---|
| Ideal Linear OT | 7.6 | 43.6 | 45.9 |
| PCT-Net | 29.1 | 38.0 | 201 |
| Harmonizer | 40.1 | 36.6 | 258 |
| MKL-Harmonizer | 65.0 | 34.1 | 438 |
| INR-Harmonization | 67.2 | 35.3 | 392 |
| Unharmonized | 182 | 31.0 | 984 |
User studies conducted on real ARCore composites (327 images) using a four-way forced-choice protocol found that MKL-Harmonizer achieved the highest mean opinion score (MOS) among the tested algorithms. In the speed-versus-quality trade-off, MKL-Harmonizer occupied the leading position, combining high MOS with the fastest processing (Larchenko et al., 16 Nov 2025).
7. Observations, Biases, and Data Contributions
Key experimental findings underscore the power of linear MKL filters: the Ideal Linear OT baseline attains the best reported scores on the standard benchmark. $L_1$ losses on the affine filter parameters yield superior sharpness and stability. MKL-Harmonizer avoids overfitting to mask leakage ("exposure bias") thanks to its global filter architecture, in contrast to pixelwise encoder-decoder networks, which are susceptible to artifact generation at high resolution (e.g., upsampling striping, JPEG blocks).
A new AR dataset of 327 real-world composite images, annotated with binary masks and spanning diverse lighting, objects, and conditions, is provided. Data gathering uses a minimally modified ARCore pipeline to capture the camera frame, the rendered object, and a precise mask. Full source code, data, and capture tools are released at https://github.com/maria-larchenko/mkl-harmonizer, providing a platform for future AR harmonization studies free of synthetic dataset biases (Larchenko et al., 16 Nov 2025).