Quasi-Monte Carlo for 3D Sliced Wasserstein (2309.11713v2)

Published 21 Sep 2023 in stat.ML, cs.GR, and cs.LG

Abstract: Monte Carlo (MC) integration has been employed as the standard approximation method for the Sliced Wasserstein (SW) distance, whose analytical expression involves an intractable expectation. However, MC integration is not optimal in terms of absolute approximation error. To provide a better class of empirical SW, we propose quasi-sliced Wasserstein (QSW) approximations that rely on Quasi-Monte Carlo (QMC) methods. For a comprehensive investigation of QMC for SW, we focus on the 3D setting, specifically computing the SW between probability measures in three dimensions. In greater detail, we empirically evaluate various methods to construct QMC point sets on the 3D unit-hypersphere, including the Gaussian-based and equal area mappings, generalized spiral points, and optimizing discrepancy energies. Furthermore, to obtain an unbiased estimator for stochastic optimization, we extend QSW to Randomized Quasi-Sliced Wasserstein (RQSW) by introducing randomness in the discussed point sets. Theoretically, we prove the asymptotic convergence of QSW and the unbiasedness of RQSW. Finally, we conduct experiments on various 3D tasks, such as point-cloud comparison, point-cloud interpolation, image style transfer, and training deep point-cloud autoencoders, to demonstrate the favorable performance of the proposed QSW and RQSW variants.
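As a minimal illustration of the idea in the abstract (not the authors' implementation; all function names here are hypothetical), the sketch below estimates the empirical SW distance between two 3D point clouds in two ways: plain MC with uniformly random projection directions, and a QMC variant using generalized spiral (golden-angle) points on the unit sphere, one of the point-set constructions the paper evaluates. For equal-size, equal-weight clouds, the 1D Wasserstein distance along each direction reduces to matching sorted projections.

```python
import numpy as np

def spiral_points(n):
    """Generalized spiral points on the unit sphere S^2 (golden-angle spacing)."""
    i = np.arange(n)
    z = 1 - (2 * i + 1) / n              # heights uniformly spaced in (-1, 1)
    phi = i * np.pi * (3 - np.sqrt(5))   # golden-angle azimuthal increment
    r = np.sqrt(1 - z**2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def sliced_wasserstein(X, Y, directions, p=2):
    """Empirical SW_p^p between equal-size point clouds X, Y (n x 3),
    averaged over the given (L x 3) projection directions."""
    px = X @ directions.T                # projections, shape (n, L)
    py = Y @ directions.T
    # 1D OT between equal-weight empirical measures = match sorted samples
    return (np.abs(np.sort(px, axis=0) - np.sort(py, axis=0)) ** p).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = rng.normal(size=(100, 3)) + 1.0      # shifted cloud

L = 64
mc_dirs = rng.normal(size=(L, 3))        # MC: uniform directions via
mc_dirs /= np.linalg.norm(mc_dirs, axis=1, keepdims=True)  # normalized Gaussians

sw_mc = sliced_wasserstein(X, Y, mc_dirs)          # Monte Carlo estimate
sw_qmc = sliced_wasserstein(X, Y, spiral_points(L))  # quasi-Monte Carlo estimate
```

The QMC estimator simply swaps the random direction set for a low-discrepancy one; the randomized variant (RQSW) discussed in the abstract would, for example, apply a uniform random rotation to the spiral point set to recover an unbiased estimator for stochastic optimization.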
