
Diffusion-Driven Generative Framework for Molecular Conformation Prediction (2401.09451v2)

Published 22 Dec 2023 in q-bio.BM, cs.AI, cs.LG, and physics.chem-ph

Abstract: The task of deducing three-dimensional molecular configurations from their two-dimensional graph representations holds paramount importance in the fields of computational chemistry and pharmaceutical development. The rapid advancement of machine learning, particularly within the domain of deep generative networks, has revolutionized the precision of predictive modeling in this context. Traditional approaches often adopt a two-step strategy: initially estimating interatomic distances and subsequently refining the spatial molecular structure by solving a distance geometry problem. However, this sequential approach occasionally falls short in accurately capturing the intricacies of local atomic arrangements, thereby compromising the fidelity of the resulting structural models. Addressing these limitations, this research introduces a cutting-edge generative framework named \method{}. This framework is grounded in the principles of diffusion observed in classical non-equilibrium thermodynamics. \method{} views atoms as discrete entities and excels in guiding the reversal of diffusion, transforming a distribution of stochastic noise back into coherent molecular structures through a process akin to a Markov chain. This transformation commences with the initial representation of a molecular graph in an abstract latent space, culminating in the realization of three-dimensional structures via a sophisticated bilevel optimization scheme meticulously tailored to meet the specific requirements of the task. One of the formidable challenges in this modeling endeavor involves preserving roto-translational invariance to ensure that the generated molecular conformations adhere to the laws of physics. Extensive experimental evaluations confirm the efficacy of the proposed \method{} in comparison to state-of-the-art methods.
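The abstract's core idea — running a diffusion process in reverse, as a Markov chain, from noise back to 3D coordinates while respecting roto-translational invariance — can be sketched in a few lines. The snippet below is a minimal, generic DDPM-style illustration, not the paper's actual \method{}: the function names, the zero-center-of-mass projection used to quotient out translations, and the noise schedule values are all illustrative assumptions.

```python
import numpy as np

def remove_com(x):
    # Project coordinates onto the zero-center-of-mass subspace,
    # so global translations are quotiented out of the model.
    return x - x.mean(axis=0, keepdims=True)

def forward_noise(x0, alpha_bar, rng):
    # Closed-form forward diffusion sample q(x_t | x_0):
    # x_t = sqrt(alpha_bar) * x_0 + sqrt(1 - alpha_bar) * eps.
    eps = remove_com(rng.standard_normal(x0.shape))  # noise kept in zero-CoM subspace
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps

def reverse_step(xt, eps_hat, alpha_t, alpha_bar_t, sigma_t, rng):
    # One reverse (denoising) Markov step. In the real framework eps_hat
    # would come from a learned equivariant network; here it is an input.
    mean = (xt - (1.0 - alpha_t) / np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_t)
    z = remove_com(rng.standard_normal(xt.shape))
    return mean + sigma_t * z

# Toy usage: 5 atoms in 3D, one forward-noising and one reverse step.
rng = np.random.default_rng(0)
x0 = remove_com(rng.standard_normal((5, 3)))
xt, eps = forward_noise(x0, alpha_bar=0.5, rng=rng)
x_prev = reverse_step(xt, eps, alpha_t=0.9, alpha_bar_t=0.5, sigma_t=0.1, rng=rng)
```

Because both the data and the injected noise live in the zero-center-of-mass subspace, every intermediate sample stays translation-invariant by construction; rotation invariance is what the paper's equivariant network design addresses.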

Authors (4)
  1. Bobin Yang
  2. Zhenghan Chen
  3. Jie Deng
  4. Ruoxue Wu
