Uni-Mol2: Exploring Molecular Pretraining Model at Scale (2406.14969v2)

Published 21 Jun 2024 in cs.LG and cs.AI

Abstract: In recent years, pretraining models have made significant advancements in the fields of NLP, computer vision (CV), and life sciences. The advancements in NLP and CV are predominantly driven by the expansion of model parameters and data size, a phenomenon now recognized as the scaling laws. However, scaling laws in molecular pretraining models remain largely unexplored. In this work, we present Uni-Mol2, an innovative molecular pretraining model that leverages a two-track transformer to effectively integrate features at the atomic level, graph level, and geometry structure level. Along with this, we systematically investigate the scaling law within molecular pretraining models, characterizing the power-law correlations between validation loss and model size, dataset size, and computational resources. Consequently, we successfully scale Uni-Mol2 to 1.1 billion parameters through pretraining on 800 million conformations, making it the largest molecular pretraining model to date. Extensive experiments show consistent improvement in downstream tasks as the model size grows. Uni-Mol2 with 1.1B parameters also outperforms existing methods, achieving an average improvement of 27% on the QM9 dataset and 14% on the COMPAS-1D dataset.
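
The scaling analysis described in the abstract characterizes power-law relationships between validation loss and model size, dataset size, and compute. As a minimal illustration of how such a relationship is typically fitted, the sketch below fits L(N) = a * N^(-alpha) + L_inf to hypothetical (parameter count, validation loss) pairs; the loss values, initial guesses, and the fitting procedure are illustrative assumptions, not results or code from the paper.

```python
# Illustrative sketch (assumed, not from the paper): fitting a power-law
# scaling curve L(N) = a * N^(-alpha) + L_inf, the generic form used in
# scaling-law studies, to hypothetical (model size, validation loss) pairs.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n_params, a, alpha, l_inf):
    """Validation loss as a power law in model size plus an irreducible floor."""
    return a * np.power(n_params, -alpha) + l_inf

# Hypothetical data: parameter counts spanning 42M to 1.1B and made-up losses.
n = np.array([42e6, 84e6, 160e6, 310e6, 570e6, 1.1e9])
loss = np.array([0.512, 0.495, 0.480, 0.466, 0.453, 0.440])

# Non-linear least-squares fit; p0 is a rough initial guess for (a, alpha, L_inf).
(a, alpha, l_inf), _ = curve_fit(power_law, n, loss, p0=[1.0, 0.1, 0.3], maxfev=10000)
print(f"fitted exponent alpha = {alpha:.3f}, irreducible loss = {l_inf:.3f}")

# Extrapolate the fitted curve to a larger hypothetical model size.
print(f"predicted loss at 5B params: {power_law(5e9, a, alpha, l_inf):.3f}")
```

The same fitting procedure can be repeated with dataset size or training compute on the x-axis to obtain the corresponding power-law exponents.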

Authors (7)
  1. Xiaohong Ji (5 papers)
  2. Zhifeng Gao (36 papers)
  3. Hang Zheng (42 papers)
  4. Linfeng Zhang (160 papers)
  5. Guolin Ke (43 papers)
  6. Weinan E (127 papers)
  7. Zhen Wang (571 papers)
Citations (1)