
Molecular De Novo Design through Transformer-based Reinforcement Learning (2310.05365v5)

Published 9 Oct 2023 in cs.LG and cs.AI

Abstract: In this work, we introduce a method to fine-tune a Transformer-based generative model for molecular de novo design. Leveraging the superior sequence learning capacity of Transformers over Recurrent Neural Networks (RNNs), our model can generate molecular structures with desired properties effectively. In contrast to the traditional RNN-based models, our proposed method exhibits superior performance in generating compounds predicted to be active against various biological targets, capturing long-term dependencies in the molecular structure sequence. The model's efficacy is demonstrated across numerous tasks, including generating analogues to a query structure and producing compounds with particular attributes, outperforming the baseline RNN-based methods. Our approach can be used for scaffold hopping, library expansion starting from a single molecule, and generating compounds with high predicted activity against biological targets.
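The fine-tuning described in the abstract follows the REINVENT-style reinforcement learning setup that this line of work builds on: a fixed prior model supplies a log-likelihood for each sampled SMILES string, a scoring function rewards desired properties, and the agent (here a Transformer rather than an RNN) is trained to match an "augmented" likelihood. The sketch below shows that core objective in plain Python; the function name, example log-likelihood values, and the default `sigma` weight are illustrative assumptions, not taken from the paper.

```python
def augmented_likelihood_loss(prior_ll, agent_ll, score, sigma=60.0):
    """REINVENT-style loss for one sampled molecule (sketch).

    prior_ll : log-likelihood of the SMILES under the frozen prior model
    agent_ll : log-likelihood under the agent being fine-tuned
    score    : scalar reward in [0, 1] from the property scoring function
    sigma    : weight trading off reward against staying close to the prior
    """
    # Target: the prior's likelihood shifted upward by the sigma-weighted reward
    augmented_ll = prior_ll + sigma * score
    # Squared error pulls the agent's likelihood toward the augmented target
    return (augmented_ll - agent_ll) ** 2


# Hypothetical example: a molecule scored 0.5 by the property predictor
loss = augmented_likelihood_loss(prior_ll=-40.0, agent_ll=-35.0, score=0.5)
```

In practice this loss is averaged over a batch of sampled sequences and minimized with a gradient step on the agent's parameters; the prior term keeps generated molecules chemically plausible while the score term steers them toward the target property.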

Authors (5)
  1. Pengcheng Xu (26 papers)
  2. Tao Feng (153 papers)
  3. Tianfan Fu (53 papers)
  4. Siddhartha Laghuvarapu (3 papers)
  5. Jimeng Sun (181 papers)
Citations (1)
