
GeomCLIP: Contrastive Geometry-Text Pre-training for Molecules (2411.10821v1)

Published 16 Nov 2024 in cs.LG and q-bio.BM

Abstract: Pretraining molecular representations is crucial for drug and material discovery. Recent methods focus on learning representations from geometric structures, effectively capturing 3D position information. Yet, they overlook the rich information in biomedical texts, which detail molecules' properties and substructures. With this in mind, we set up a data collection effort for 200K pairs of ground-state geometric structures and biomedical texts, resulting in the PubChem3D dataset. Based on this dataset, we propose the GeomCLIP framework to enhance multi-modal representation learning from molecular structures and biomedical text. During pre-training, we design two types of tasks, i.e., multimodal representation alignment and unimodal denoising pretraining, to align the 3D geometric encoder with textual information and, at the same time, preserve its original representation power. Experimental results show the effectiveness of GeomCLIP in various tasks such as molecular property prediction, zero-shot text-molecule retrieval, and 3D molecule captioning. Our code and collected dataset are available at \url{https://github.com/xiaocui3737/GeomCLIP}

Authors (4)
  1. Teng Xiao (40 papers)
  2. Chao Cui (4 papers)
  3. Huaisheng Zhu (13 papers)
  4. Vasant G. Honavar (5 papers)

Summary

GeomCLIP: Contrastive Geometry-Text Pre-training for Molecules

The paper "GeomCLIP: Contrastive Geometry-Text Pre-training for Molecules" addresses a significant gap in molecular representation learning by proposing a method to integrate 3D molecular structures with textual descriptions. The approach, termed GeomCLIP, builds upon the concepts introduced by CLIP in vision-language alignment but adapts them to the domain of molecular data and biomedical text. This alignment of 3D geometric structures with text is aimed at enhancing pre-trained models used in drug and material discovery tasks such as property prediction, molecule retrieval, and molecule captioning.
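The CLIP-style alignment described above can be sketched as a symmetric contrastive (InfoNCE) objective over batched geometry-text pairs. This is a minimal illustration of the general technique, not the authors' exact implementation; the encoder outputs, temperature value, and function name are assumptions.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(geom_emb: torch.Tensor, text_emb: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of (3D-geometry, text) pairs.

    geom_emb, text_emb: (batch, dim) embeddings from the geometric and
    text encoders. Matching pairs share a row index; every other row in
    the batch serves as a negative.
    """
    # L2-normalize so the dot product is cosine similarity.
    g = F.normalize(geom_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = g @ t.T / temperature            # (batch, batch) similarities
    targets = torch.arange(g.size(0))         # diagonal entries are positives
    # Average the geometry->text and text->geometry cross-entropies.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```

Minimizing this loss pulls each molecule's 3D embedding toward its own description and away from the other descriptions in the batch, which is what lets the model perform zero-shot retrieval later.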

A novel contribution of this paper is the construction of the PubChem3D dataset, which consists of over 200,000 pairs of ground-state geometric structures and associated biomedical text descriptions. Ground-state 3D geometries are derived from high-quality sources like PubChemQC and GEOM, addressing the noise issues seen in other datasets that use computationally inferred structures via tools like RDKit.

A significant departure from earlier works is the explicit inclusion of 3D spatial information in model pre-training. Previous methods often relied on 1D SMILES sequences or 2D graphs, limiting their ability to capture molecular properties that are inherently three-dimensional. GeomCLIP rectifies this by employing transformer-based architectures for both the geometric and text encoders, learning embeddings that bridge the gap between the spatial configurations of molecules and their descriptive text.

Performance evaluations demonstrate that GeomCLIP stands out in several downstream tasks. For molecular property prediction, the approach improves on existing state-of-the-art models such as Uni-Mol and 3D-MoLM. The reported reductions in mean absolute error across 12 quantum-mechanical property prediction tasks underscore the model's capacity to capture complex molecular properties linked to their 3D structures, reaffirming the merit of integrating 3D geometry and text in molecular modeling.

In molecule-text retrieval, GeomCLIP attains higher recall rates than prior methods, a testament to the effectiveness of its learned embeddings in connecting the semantic and geometric spaces. Moreover, in the molecule captioning task, GeomCLIP generates textual descriptions that align well with molecular structures, outperforming baseline models such as MolT5 on BLEU and ROUGE metrics.
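Once the two encoders are aligned, zero-shot text-to-molecule retrieval reduces to nearest-neighbor search in the shared embedding space. A minimal sketch of that ranking step follows; the function name and interface are illustrative assumptions, not the repository's API.

```python
import torch
import torch.nn.functional as F

def retrieve_molecules(text_emb: torch.Tensor, geom_embs: torch.Tensor,
                       k: int = 5) -> torch.Tensor:
    """Rank molecules against a text query by cosine similarity.

    text_emb: (dim,) embedding of the query description.
    geom_embs: (n_mols, dim) embeddings of the molecule database.
    Returns indices of the top-k most similar molecules.
    """
    q = F.normalize(text_emb, dim=-1)
    db = F.normalize(geom_embs, dim=-1)
    scores = db @ q                    # cosine similarity per molecule
    return torch.topk(scores, k).indices
```

Recall@k for the retrieval benchmarks mentioned above is then just the fraction of queries whose ground-truth molecule appears among these top-k indices.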

The introduction of a denoising pretraining task further strengthens the GeomCLIP model by helping preserve unimodal geometric information while injecting informative text-based context. This ability to merge multimodal data into a coherent learning framework paves the way for future exploration in AI models that leverage rich, multidimensional datasets.
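A common form of the denoising objective mentioned above is coordinate denoising: perturb the ground-state atom positions with Gaussian noise and train the geometric encoder to predict that noise. The sketch below illustrates the general idea under that assumption; the `model` interface and noise scale are hypothetical, not taken from the paper.

```python
import torch

def coordinate_denoising_loss(model, coords: torch.Tensor,
                              noise_scale: float = 0.1) -> torch.Tensor:
    """Coordinate-denoising surrogate for 3D unimodal pretraining.

    coords: (n_atoms, 3) ground-state positions. `model` is assumed to
    map noisy coordinates to a (n_atoms, 3) estimate of the added noise.
    Learning to undo the perturbation forces the encoder to retain
    geometric information alongside the text-alignment objective.
    """
    noise = noise_scale * torch.randn_like(coords)
    pred = model(coords + noise)
    # Mean squared error between predicted and true perturbations.
    return ((pred - noise) ** 2).mean()
```

Trained jointly with the contrastive alignment loss, this term is what lets the geometric encoder absorb textual context without degrading its original 3D representation power.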

The broader implications of this work in AI and computational drug discovery are notable. The GeomCLIP framework provides a foundational step towards more integrated and accurate representations that can better inform virtual screening, lead optimization, and other drug discovery processes. This work also encourages the development of more refined datasets that capture the nuanced interplay between molecular structures and their biological narratives. Future challenges include scaling the dataset and model to encompass more diverse molecular families, as well as further optimization to enhance computational efficiency.

In summary, GeomCLIP's integration of 3D geometries with textual data represents an impactful advance in bridging data modalities, enriching the molecular representation landscape with improved interpretability and application performance. This work illustrates the potential of cross-disciplinary innovation, linking molecular sciences with textual understanding in AI research.
