
Token-Mol 1.0: Tokenized drug design with large language model (2407.07930v2)

Published 10 Jul 2024 in q-bio.BM and cs.LG

Abstract: Significant interest has recently arisen in leveraging sequence-based LLMs for drug design. However, most current applications of LLMs in drug discovery lack the ability to comprehend three-dimensional (3D) structures, limiting their effectiveness in tasks that explicitly involve molecular conformations. In this study, we introduced Token-Mol, a token-only 3D drug design model. This model encodes all molecular information, including 2D and 3D structures as well as molecular property data, into tokens, which transforms classification and regression tasks in drug discovery into probabilistic prediction problems and thereby enables learning through a unified paradigm. Token-Mol is built on the transformer decoder architecture and trained using random causal masking techniques. Additionally, we proposed the Gaussian cross-entropy (GCE) loss function to overcome the challenges in regression tasks, significantly enhancing the capacity of LLMs to learn continuous numerical values. Through a combination of fine-tuning and reinforcement learning (RL), Token-Mol achieves performance comparable to or surpassing existing task-specific methods across various downstream tasks, including pocket-based molecular generation, conformation generation, and molecular property prediction. Compared to existing molecular pre-trained models, Token-Mol exhibits superior proficiency in handling a wider range of downstream tasks essential for drug design. Notably, our approach improves regression task accuracy by approximately 30% compared to similar token-only methods. Token-Mol overcomes the precision limitations of token-only models and has the potential to integrate seamlessly with general models such as ChatGPT, paving the way for a universal artificial intelligence drug design model that facilitates rapid and high-quality drug design by experts.
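The abstract's key technical idea is that regression targets are tokenized into discrete value bins, so a continuous property can be learned as a probabilistic prediction. The sketch below illustrates one plausible reading of a Gaussian cross-entropy loss: instead of a one-hot label on the single correct bin, the target is a Gaussian-smoothed distribution over neighboring bins, so predictions near the true value are penalized less than distant ones. The function name, bin layout, and `sigma` width are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_cross_entropy(logits, target, bin_centers, sigma=0.5):
    """Cross-entropy against a Gaussian-smoothed soft label over value bins.

    logits:      model scores over the discretized value vocabulary (assumed)
    target:      the true continuous value
    bin_centers: the continuous value represented by each numeric token
    sigma:       width of the Gaussian smoothing (illustrative choice)
    """
    # Soft target: Gaussian density centered on the true value, normalized
    # over the bins, so mass spreads to tokens near the correct one.
    weights = np.exp(-0.5 * ((bin_centers - target) / sigma) ** 2)
    soft_target = weights / weights.sum()

    # Numerically stable log-softmax of the model's logits over the bins.
    m = logits.max()
    log_probs = logits - (m + np.log(np.exp(logits - m).sum()))

    # Standard cross-entropy between the soft target and the prediction.
    return -np.sum(soft_target * log_probs)
```

Under this reading, a model that places probability mass on a bin adjacent to the true value incurs only a small loss, which is how a token-only model can learn smooth numeric structure that ordinary one-hot cross-entropy cannot express.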

Authors (20)
  1. Jike Wang (10 papers)
  2. Rui Qin (19 papers)
  3. Mingyang Wang (36 papers)
  4. Meijing Fang (1 paper)
  5. Yangyang Zhang (49 papers)
  6. Yuchen Zhu (30 papers)
  7. Qun Su (4 papers)
  8. Qiaolin Gou (1 paper)
  9. Chao Shen (168 papers)
  10. Odin Zhang (18 papers)
  11. Zhenxing Wu (4 papers)
  12. Dejun Jiang (7 papers)
  13. Xujun Zhang (13 papers)
  14. Huifeng Zhao (6 papers)
  15. Xiaozhe Wan (1 paper)
  16. Zhourui Wu (1 paper)
  17. Liwei Liu (23 papers)
  18. Yu Kang (61 papers)
  19. Chang-Yu Hsieh (63 papers)
  20. Tingjun Hou (23 papers)