
On Diffusion Process in SE(3)-invariant Space (2403.01430v1)

Published 3 Mar 2024 in cs.LG

Abstract: Sampling viable 3D structures (e.g., molecules and point clouds) with SE(3)-invariance using diffusion-based models has proved promising in a variety of real-world applications, where SE(3)-invariant properties can be naturally characterized by the inter-point distance manifold. However, owing to the non-trivial geometry of this manifold, a comprehensive understanding of the diffusion mechanism within such an SE(3)-invariant space is still lacking. This study addresses that gap by mathematically delineating the diffusion mechanism under SE(3)-invariance, examining the interaction between coordinates and the inter-point distance manifold through the lens of differential geometry. Building on this analysis, we propose accurate, projection-free diffusion SDE and ODE formulations. These formulations improve both the quality and the speed of the generation pathways, while also offering insights into other systems that incorporate SE(3)-invariance.
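
As a minimal illustration of the setting (not the paper's method), the sketch below computes the inter-point distance matrix of a 3D point set, verifies its invariance under a rigid motion, and applies a generic variance-exploding perturbation of the kind score-based diffusion models use. The function names and the geometric noise schedule are assumptions for illustration only. Note that naïvely noised distances need not remain a valid Euclidean distance matrix; handling this geometric subtlety without an explicit projection step is what the paper's projection-free SDE/ODE address.

import numpy as np

def pairwise_distances(coords):
    """(n, n) inter-point distance matrix; unchanged by rotations and translations (SE(3))."""
    diff = coords[:, None, :] - coords[None, :, :]   # (n, n, 3) displacement vectors
    return np.linalg.norm(diff, axis=-1)

def ve_forward_noise(d, t, sigma_min=0.01, sigma_max=10.0, rng=None):
    """Generic variance-exploding perturbation of a distance matrix at time t in [0, 1] (illustrative schedule)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_t = sigma_min * (sigma_max / sigma_min) ** t   # geometric noise schedule (an assumption)
    noise = rng.normal(size=d.shape)
    noise = (noise + noise.T) / 2.0                      # keep the perturbed matrix symmetric
    np.fill_diagonal(noise, 0.0)                         # self-distances stay zero
    return d + sigma_t * noise                           # may leave the manifold of valid distance matrices

# Usage: a random 5-point configuration; its distance matrix is unchanged by a rigid motion.
coords = np.random.default_rng(0).normal(size=(5, 3))
Q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))  # random orthogonal transform
moved = coords @ Q.T + np.array([1.0, -2.0, 0.5])                  # rotate/reflect and translate
assert np.allclose(pairwise_distances(coords), pairwise_distances(moved), atol=1e-8)
noisy_d = ve_forward_noise(pairwise_distances(coords), t=0.5)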
