
InteractionNet: Modeling and Explaining of Noncovalent Protein-Ligand Interactions with Noncovalent Graph Neural Network and Layer-Wise Relevance Propagation (2005.13438v1)

Published 12 May 2020 in q-bio.BM, cs.LG, and stat.ML

Abstract: Expanding the scope of graph-based deep-learning models to noncovalent protein-ligand interactions has attracted increasing attention in structure-based drug design. Modeling protein-ligand interactions with graph neural networks (GNNs) has been hampered by the difficulty of converting protein-ligand complex structures into a graph representation, and it has remained unclear whether trained models actually learn the appropriate noncovalent interactions. Here, we propose a GNN architecture, denoted InteractionNet, which learns two separate molecular graphs, covalent and noncovalent, through distinct convolution layers. We also analyzed the InteractionNet model with an explainability technique, layer-wise relevance propagation, to examine the chemical relevance of the model's predictions. Separating the covalent and noncovalent convolutional steps made it possible to evaluate the contribution of each step independently and to analyze the graph-building strategy for noncovalent interactions. We applied InteractionNet to the prediction of protein-ligand binding affinity and showed that our model successfully captures noncovalent interactions, in terms of both predictive performance and chemical interpretability.
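The core architectural idea, running distinct convolution layers over separate covalent and noncovalent graphs built on the same atoms, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the adjacency matrices, feature dimensions, and the helper `graph_conv` are hypothetical, and nonlinearities, normalization, and readout layers are omitted.

```python
import numpy as np

def graph_conv(H, A, W):
    # One graph-convolution step: aggregate neighbor features over
    # adjacency A, then apply a linear transform W (nonlinearity
    # omitted for brevity).
    return (A @ H) @ W

# Toy complex: 4 atoms, 3 features each (hypothetical values).
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))

# Two separate adjacency matrices over the SAME atoms: one for
# covalent bonds, one for noncovalent contacts (e.g. atom pairs
# within a distance cutoff). Self-loops included.
A_cov = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
A_non = np.array([[1, 0, 0, 1],
                  [0, 1, 0, 0],
                  [0, 0, 1, 1],
                  [1, 0, 1, 1]], dtype=float)

# Distinct weight matrices so the covalent and noncovalent steps
# are learned (and can be analyzed) independently.
W_cov = rng.standard_normal((3, 3))
W_non = rng.standard_normal((3, 3))

H_cov = graph_conv(H, A_cov, W_cov)      # covalent convolution
H_out = graph_conv(H_cov, A_non, W_non)  # noncovalent convolution

print(H_out.shape)
```

Because the two steps use separate adjacency structures and weights, the contribution of each (and the relevance assigned to it by a technique such as layer-wise relevance propagation) can be inspected on its own.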

Citations (4)
