Optimal Flow Matching: Learning Straight Trajectories in Just One Step (2403.13117v3)

Published 19 Mar 2024 in stat.ML and cs.LG

Abstract: Over recent years, there has been a boom in the development of Flow Matching (FM) methods for generative modeling. One intriguing property pursued by the community is the ability to learn flows with straight trajectories which realize the Optimal Transport (OT) displacements. Straightness is crucial for fast integration (inference) of the learned flow's paths. Unfortunately, most existing flow straightening methods are based on non-trivial iterative FM procedures which accumulate error during training or exploit heuristics based on minibatch OT. To address these issues, we develop and theoretically justify the novel \textbf{Optimal Flow Matching} (OFM) approach, which allows recovering the straight OT displacement for the quadratic transport cost in just one FM step. The main idea of our approach is to employ vector fields for FM which are parameterized by convex functions.
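To make the abstract's central property concrete, here is a minimal 1D sketch (not the paper's method, just an illustration of the underlying fact): for quadratic-cost OT between two Gaussians, the OT map is the gradient of a convex potential, and the induced displacement interpolation is a straight line with constant velocity, so a single Euler step integrates the path exactly. The distributions and potential below are assumptions chosen for a closed-form check.

```python
import numpy as np

# 1D Gaussians: p0 = N(m0, s0^2), p1 = N(m1, s1^2)  (illustrative choice)
m0, s0 = 0.0, 1.0
m1, s1 = 3.0, 2.0

# Convex potential whose gradient is the quadratic-cost OT map:
#   psi(x) = m1*x + (s1/s0) * (x - m0)^2 / 2,  so  T(x) = psi'(x)
def T(x):
    return m1 + (s1 / s0) * (x - m0)

x0 = np.array([-1.0, 0.0, 2.0])
x1 = T(x0)  # OT image of x0

# The straight trajectory x_t = (1-t)*x0 + t*T(x0) has constant velocity
# T(x0) - x0, so one Euler step of size (1-t) from x_t lands exactly on x1.
for t in (0.25, 0.5, 0.75):
    xt = (1 - t) * x0 + t * x1
    v = x1 - x0  # the constant velocity a straight FM flow must learn
    assert np.allclose(xt + (1 - t) * v, x1)

print(x1)  # [1. 3. 7.]
```

This is why straightness matters for inference: a learned field whose trajectories are straight can be integrated in one step rather than with many ODE-solver steps.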
