
UniCorn: A Unified Contrastive Learning Approach for Multi-view Molecular Representation Learning (2405.10343v1)

Published 15 May 2024 in q-bio.BM, cs.AI, and cs.LG

Abstract: Recently, a noticeable trend has emerged toward developing pre-trained foundation models in CV and NLP. For molecular pre-training, however, no universal model applies effectively across the various categories of molecular tasks, since existing pre-training methods are effective only for specific types of downstream tasks. Furthermore, the lack of a profound understanding of existing pre-training methods, including 2D graph masking, 2D-3D contrastive learning, and 3D denoising, hampers the advancement of molecular foundation models. In this work, we provide a unified comprehension of existing pre-training methods through the lens of contrastive learning: their distinctions lie in clustering different views of molecules, each of which is shown to benefit specific downstream tasks. To achieve a complete and general-purpose molecular representation, we propose a novel pre-training framework, named UniCorn, that inherits the merits of the three methods and depicts molecular views at three different levels. SOTA performance across quantum, physicochemical, and biological tasks, along with a comprehensive ablation study, validates the universality and effectiveness of UniCorn.
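The abstract frames the three pre-training methods as variants of contrastive learning over different molecular views. As an illustration of the underlying mechanism (not the paper's exact formulation), the sketch below implements the standard InfoNCE objective in NumPy: matching rows of two view-embedding batches are treated as positive pairs and all other rows as negatives, so aligned views yield a lower loss than unrelated ones.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE contrastive loss between two batches of view embeddings.

    z1, z2: (batch, dim) embeddings of two views of the same molecules;
    row i of z1 and row i of z2 form a positive pair, all other rows
    in the batch act as negatives. Illustrative sketch only, not the
    paper's specific loss.
    """
    # L2-normalize so the dot product becomes cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positive pairs sit on the diagonal of the similarity matrix
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce_loss(z, z)                      # identical views
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))  # unrelated views
```

Minimizing this loss pulls embeddings of the same molecule's views together while pushing apart different molecules, which is the clustering behavior the paper attributes to all three pre-training families.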

Authors (7)
  1. Shikun Feng (37 papers)
  2. Yuyan Ni (14 papers)
  3. Minghao Li (44 papers)
  4. Yanwen Huang (12 papers)
  5. Zhi-Ming Ma (56 papers)
  6. Wei-Ying Ma (39 papers)
  7. Yanyan Lan (87 papers)
Citations (4)
