
MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter (2310.12798v4)

Published 19 Oct 2023 in cs.CL and cs.MM

Abstract: Language Models (LMs) have demonstrated impressive molecule understanding ability on various 1D text-related tasks. However, they inherently lack 2D graph perception - a critical ability of human professionals in comprehending molecules' topological structures. To bridge this gap, we propose MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter. MolCA enables an LM (e.g., Galactica) to understand both text- and graph-based molecular contents via the cross-modal projector. Specifically, the cross-modal projector is implemented as a Q-Former to connect a graph encoder's representation space and an LM's text space. Further, MolCA employs a uni-modal adapter (i.e., LoRA) for the LM's efficient adaptation to downstream tasks. Unlike previous studies that couple an LM with a graph encoder via cross-modal contrastive learning, MolCA retains the LM's ability of open-ended text generation and augments it with 2D graph information. To showcase its effectiveness, we extensively benchmark MolCA on tasks of molecule captioning, IUPAC name prediction, and molecule-text retrieval, on which MolCA significantly outperforms the baselines. Our codes and checkpoints can be found at https://github.com/acharkq/MolCA.
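The architecture described in the abstract can be sketched at a high level: query tokens cross-attend over the graph encoder's node embeddings (the Q-Former's core mechanism), the pooled result is projected into the LM's text-embedding space as soft prompts, and the LM itself is adapted with low-rank (LoRA) weight deltas. The following is a minimal, hypothetical sketch of those two ideas with made-up dimensions and a single attention head; the actual MolCA projector is a BERT-initialized Q-Former, not this simplified pooling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
d_graph, d_lm, n_queries, n_nodes = 16, 32, 4, 10

# Graph encoder output: one embedding per atom/node of the molecule.
node_emb = rng.normal(size=(n_nodes, d_graph))

# Learnable query tokens that "read" the graph via cross-attention.
queries = rng.normal(size=(n_queries, d_graph))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Single-head cross-attention: each query attends over all graph nodes.
attn = softmax(queries @ node_emb.T / np.sqrt(d_graph))  # (n_queries, n_nodes)
pooled = attn @ node_emb                                 # (n_queries, d_graph)

# Linear projection into the LM's text-embedding space; these vectors
# are prepended to the text token embeddings as soft prompts.
W_proj = rng.normal(size=(d_graph, d_lm))
soft_prompt = pooled @ W_proj                            # (n_queries, d_lm)

# Uni-modal adapter: a LoRA-style low-rank update to a frozen LM weight.
rank = 2
W_frozen = rng.normal(size=(d_lm, d_lm))
A = rng.normal(size=(d_lm, rank)) * 0.01
B = np.zeros((rank, d_lm))          # B starts at zero, so W_eff == W_frozen
W_eff = W_frozen + A @ B            # only A and B are trained downstream

print(soft_prompt.shape)  # -> (4, 32)
```

The key design point the abstract emphasizes is that the soft prompts augment, rather than replace, the LM's input, so open-ended text generation is preserved while 2D graph information is injected.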

Authors (8)
  1. Zhiyuan Liu (433 papers)
  2. Sihang Li (32 papers)
  3. Yanchen Luo (6 papers)
  4. Hao Fei (105 papers)
  5. Yixin Cao (138 papers)
  6. Kenji Kawaguchi (147 papers)
  7. Xiang Wang (279 papers)
  8. Tat-Seng Chua (360 papers)
Citations (61)