
Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models

Published 29 Nov 2018 in cs.LG, cs.AI, cs.DB, and stat.ML | arXiv:1811.12823v5

Abstract: Generative models are becoming a tool of choice for exploring the molecular space. These models are trained on a large dataset and produce novel molecular structures with similar properties. Generated structures can be utilized for virtual screening or for training semi-supervised predictive models in downstream tasks. While there are plenty of generative models, it is unclear how to compare and rank them. In this work, we introduce a benchmarking platform called Molecular Sets (MOSES) to standardize the training and comparison of molecular generative models. MOSES provides training and testing datasets, and a set of metrics to evaluate the quality and diversity of generated structures. We have implemented and compared several molecular generation models and suggest using our results as reference points for further advancements in generative chemistry research. The platform and source code are available at https://github.com/molecularsets/moses.

Citations (578)

Summary

  • The paper introduces MOSES, a comprehensive benchmarking platform with a curated dataset and diverse evaluation metrics for molecular generation models.
  • The paper evaluates established generative models like CharRNN, VAE, and LatentGAN, highlighting differences in producing valid, unique, and novel molecules.
  • The paper details rigorous methodology including dataset filtering, SMILES and graph representations, and metrics such as Fréchet ChemNet Distance to drive advances in computational chemistry.


Overview

The paper introduces the Molecular Sets (MOSES) platform, a benchmarking resource for generative models in chemistry. By offering a standardized dataset alongside a suite of evaluation metrics, MOSES addresses a pressing need in molecular generation research: the consistent assessment and comparison of model performance. This summary outlines the dataset construction, the evaluation metrics, and the baseline models provided within the platform.

Key Contributions

  1. Benchmarking Platform: MOSES provides a comprehensive environment for comparing molecular generation models by supplying training and testing datasets and a set of diverse metrics.
  2. Generative Models: The paper presents an array of established generative models, including CharRNN, VAE, AAE, JTN-VAE, LatentGAN, and others. Each model was trained and evaluated using consistent methodologies, allowing for an equitable comparison of their performance.
  3. Evaluation Metrics: The platform introduces various metrics such as Validity, Uniqueness, Novelty, and Fréchet ChemNet Distance (FCD). These metrics ensure multifaceted evaluation, measuring aspects like chemical validity, diversity, and similarity to training distributions.
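The three count-based metrics above (Validity, Uniqueness, Novelty) reduce to simple set arithmetic over SMILES strings. The sketch below is a minimal pure-Python version, not the MOSES implementation: the `is_valid` argument is a placeholder for a real chemistry-toolkit check (e.g. parsing with RDKit), and the toy SMILES are illustrative only.

```python
def compute_metrics(generated, train_set, is_valid):
    """Count-based generation metrics over a list of SMILES strings.

    generated: list of generated SMILES (may contain duplicates and invalid entries)
    train_set: set of training SMILES
    is_valid:  callable SMILES -> bool (a real pipeline would parse with RDKit)
    """
    valid = [s for s in generated if is_valid(s)]
    validity = len(valid) / len(generated)            # fraction that are chemically valid
    unique = set(valid)
    uniqueness = len(unique) / len(valid)             # fraction of valid strings that are distinct
    novelty = len(unique - train_set) / len(unique)   # fraction of unique strings absent from training
    return {"validity": validity, "uniqueness": uniqueness, "novelty": novelty}

# Toy example with a stand-in validity check (any non-empty string counts as valid):
train = {"CCO", "c1ccccc1"}
gen = ["CCO", "CCO", "CCN", "", "c1ccccc1O"]
print(compute_metrics(gen, train, is_valid=bool))
```

Note that uniqueness is computed over valid molecules only and novelty over unique ones, so the three numbers are not independent; distributional metrics such as FCD complement them by comparing generated and reference sets as a whole.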

Methodology

  • Dataset: The MOSES dataset is derived from the ZINC Clean Leads collection and was filtered to exclude molecules with undesirable substructures and properties (e.g. structural alerts flagged by medicinal chemistry filters), yielding a clean, drug-like training corpus.
  • Molecular Representations: Models utilize SMILES strings and molecular graphs, enabling flexibility in input representation for generative processes.
  • Metrics: Metrics such as Fragment and Scaffold Similarity, Internal Diversity, and Property Distributions offer nuanced insights into model effectiveness in generating chemically diverse and novel molecules.
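Internal diversity is built on pairwise Tanimoto similarity between molecular fingerprints. The sketch below is a simplified illustration, assuming fingerprints are given as Python sets of "on" bits (a real pipeline would compute e.g. Morgan fingerprints with RDKit) and averaging over unordered distinct pairs, whereas MOSES averages over all ordered pairs, which differs only in the self-pair terms.

```python
from itertools import combinations

def tanimoto(a, b):
    """Tanimoto similarity of two fingerprints represented as sets of on-bits."""
    if not a and not b:
        return 1.0
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

def internal_diversity(fps):
    """1 minus the mean pairwise Tanimoto similarity over distinct pairs.

    Higher values mean the generated set is more chemically diverse;
    a mode-collapsed model that repeats near-identical molecules scores near 0.
    """
    pairs = list(combinations(fps, 2))
    mean_sim = sum(tanimoto(a, b) for a, b in pairs) / len(pairs)
    return 1.0 - mean_sim

# Toy bit-set fingerprints (hypothetical; real ones come from a chemistry toolkit):
fps = [{1, 2, 3}, {2, 3, 4}, {7, 8}]
print(internal_diversity(fps))
```

Fragment and Scaffold Similarity follow the same comparative spirit but operate on distributions of BRICS fragments and Bemis-Murcko scaffolds rather than on whole-molecule fingerprints.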

Results

The paper shows that no single model dominates across all metrics. For instance, the CharRNN model performed best at generating valid and unique molecules while also achieving the strongest FCD scores. However, challenges such as mode collapse and the limited ability to discover new chemical scaffolds remain areas for improvement across many models.

Implications and Future Directions

MOSES provides a structured approach to molecular generation, driving research towards more effective and generalizable models. The platform allows researchers to explore new architectures or training paradigms with the assurance of robust evaluation mechanisms.

Future developments could involve expanding the dataset and incorporating additional models and metrics, fostering advancements in various applications such as drug discovery and materials science.

Conclusion

MOSES establishes a much-needed standard for comparing molecular generative models, contributing significantly to the literature by allowing consistent evaluation of different methodologies. This platform is poised to facilitate the development of more capable and versatile generative models, ultimately advancing the field of computational chemistry.
