
A hybrid quantum-classical fusion neural network to improve protein-ligand binding affinity predictions for drug discovery (2309.03919v3)

Published 6 Sep 2023 in quant-ph and cs.LG

Abstract: The field of drug discovery hinges on the accurate prediction of binding affinity between prospective drug molecules and target proteins, especially when such proteins directly influence disease progression. However, estimating binding affinity demands significant financial and computational resources. While state-of-the-art methodologies employ classical ML techniques, emerging hybrid quantum machine learning (QML) models have shown promise for enhanced performance, owing to their inherent parallelism and capacity to manage exponential increases in data dimensionality. Despite these advances, existing models encounter issues related to convergence stability and prediction accuracy. This paper introduces a novel hybrid quantum-classical deep learning model tailored for binding affinity prediction in drug discovery. Specifically, the proposed model synergistically integrates 3D and spatial graph convolutional neural networks within an optimized quantum architecture. Simulation results demonstrate a 6% improvement in prediction accuracy relative to existing classical models, as well as a significantly more stable convergence performance compared to previous classical approaches.
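The paper's actual architecture (3D convolutional and spatial graph convolutional networks feeding an optimized quantum circuit) is not reproduced here. As a rough, hedged illustration of the general hybrid quantum-classical pattern the abstract describes, the sketch below angle-encodes classical features into single-qubit rotations, analytically evaluates each qubit's Pauli-Z expectation value, and passes the measurements through a classical linear readout. All function names, weights, and the closed-form simulator are illustrative assumptions, not the authors' model.

```python
import math

def ry_expectation_z(theta):
    """Apply RY(theta) to |0> and return <Z>.
    RY(theta)|0> = [cos(theta/2), sin(theta/2)], so
    <Z> = cos^2(theta/2) - sin^2(theta/2) = cos(theta)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return c * c - s * s

def hybrid_layer(features, enc_weights, out_weights, bias):
    """Hypothetical hybrid layer (illustrative, not the paper's model):
    classical features are scaled into rotation angles (angle encoding),
    each qubit's <Z> expectation is 'measured', and a classical linear
    readout maps the measurements to a scalar affinity score."""
    angles = [w * x for w, x in zip(enc_weights, features)]
    expvals = [ry_expectation_z(a) for a in angles]
    return bias + sum(w * e for w, e in zip(out_weights, expvals))

# Toy usage with made-up (untrained) parameters:
score = hybrid_layer([0.5, 1.2, -0.3], [1.0, 0.8, 1.5], [0.4, -0.2, 0.7], 0.1)
```

In a trained model, the encoding and readout weights would be optimized jointly with the classical CNN layers by backpropagating through the (differentiable) quantum expectation values; here the closed-form `cos(theta)` stands in for a statevector simulation.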

Authors (10)
  1. L. Domingo (7 papers)
  2. M. Chehimi (1 paper)
  3. S. Banerjee (86 papers)
  4. S. He Yuxun (1 paper)
  5. S. Konakanchi (1 paper)
  6. L. Ogunfowora (1 paper)
  7. S. Roy (117 papers)
  8. S. Selvaras (1 paper)
  9. M. Djukic (2 papers)
  10. C. Johnson (48 papers)
Citations (4)
