MolXPT: Wrapping Molecules with Text for Generative Pre-training (2305.10688v2)

Published 18 May 2023 in cs.CL

Abstract: Generative pre-trained Transformer (GPT) has demonstrated great success in natural language processing, and related techniques have been adapted into molecular modeling. Considering that text is the most important record of scientific discovery, in this paper we propose MolXPT, a unified LLM of text and molecules pre-trained on SMILES (a sequence representation of molecules) wrapped by text. Briefly, we detect the molecule names in each sequence and replace them with the corresponding SMILES. In this way, the SMILES can leverage information from the surrounding text, and vice versa. The wrapped sequences, text sequences from PubMed, and SMILES sequences from PubChem are all fed into an LLM for pre-training. Experimental results demonstrate that MolXPT outperforms strong baselines for molecular property prediction on MoleculeNet, performs comparably to the best model in text-molecule translation while using fewer than half of its parameters, and enables zero-shot molecular generation without finetuning.
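
A minimal sketch of the name-to-SMILES wrapping step described in the abstract, assuming a small hand-built name-to-SMILES lookup table. The paper links detected molecule names to their SMILES representations over PubMed text; the dictionary entries, function name, tag tokens, and example sentence below are illustrative assumptions, not the paper's exact pipeline:

```python
import re

# Illustrative name -> SMILES lookup; MolXPT itself resolves detected
# molecule names to SMILES (e.g., via PubChem). These three entries are
# assumptions for the demo.
NAME_TO_SMILES = {
    "aspirin": "CC(=O)OC1=CC=CC=C1C(=O)O",
    "caffeine": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
    "ethanol": "CCO",
}

def wrap_molecules(text: str) -> str:
    """Replace known molecule names in `text` with their SMILES strings,
    delimited by <som>/<eom> tags so the model can distinguish text tokens
    from SMILES tokens (the tag names are an assumption here)."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, NAME_TO_SMILES)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(
        lambda m: f"<som>{NAME_TO_SMILES[m.group(0).lower()]}<eom>", text
    )

if __name__ == "__main__":
    sentence = "Aspirin inhibits COX enzymes more strongly than ethanol."
    print(wrap_molecules(sentence))
    # -> <som>CC(=O)OC1=CC=CC=C1C(=O)O<eom> inhibits COX enzymes more
    #    strongly than <som>CCO<eom>.
```

Sequences wrapped this way, alongside plain PubMed text and plain PubChem SMILES, would then form the mixed pre-training corpus the abstract describes.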

Authors (8)
  1. Zequn Liu (14 papers)
  2. Wei Zhang (1489 papers)
  3. Yingce Xia (53 papers)
  4. Lijun Wu (113 papers)
  5. Shufang Xie (29 papers)
  6. Tao Qin (201 papers)
  7. Ming Zhang (313 papers)
  8. Tie-Yan Liu (242 papers)
Citations (59)
