Learning to Make Generalizable and Diverse Predictions for Retrosynthesis (1910.09688v1)

Published 21 Oct 2019 in cs.LG and stat.ML

Abstract: We propose a new model for making generalizable and diverse retrosynthetic reaction predictions. Given a target compound, the task is to predict the likely chemical reactants to produce the target. This generative task can be framed as a sequence-to-sequence problem by using the SMILES representations of the molecules. Building on top of the popular Transformer architecture, we propose two novel pre-training methods that construct relevant auxiliary tasks (plausible reactions) for our problem. Furthermore, we incorporate a discrete latent variable model into the architecture to encourage the model to produce a diverse set of alternative predictions. On the 50k subset of reaction examples from the United States patent literature (USPTO-50k) benchmark dataset, our model greatly improves performance over the baseline, while also generating predictions that are more diverse.
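As the abstract notes, retrosynthesis can be framed as a sequence-to-sequence problem over SMILES strings: the product SMILES is the source sequence and the reactant SMILES (multiple reactants joined by `.`) is the target. A minimal sketch of that framing is below; the regex tokenizer is a standard pattern from the SMILES seq2seq literature, not code from this paper.

```python
import re

# Regex-based SMILES tokenizer (a common choice for seq2seq models over
# SMILES; assumed here for illustration, not taken from the paper).
SMILES_TOKEN_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|"
    r"\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%\d{2}|\d)"
)

def tokenize_smiles(smiles: str) -> list:
    """Split a SMILES string into tokens for a sequence-to-sequence model."""
    return SMILES_TOKEN_PATTERN.findall(smiles)

# Retrosynthesis as seq2seq: product SMILES -> reactant SMILES,
# with multiple reactants separated by '.'.
product = "CC(=O)Oc1ccccc1C(=O)O"       # aspirin (target compound)
reactants = "CC(=O)O.Oc1ccccc1C(=O)O"   # acetic acid + salicylic acid

src_tokens = tokenize_smiles(product)    # encoder input
tgt_tokens = tokenize_smiles(reactants)  # decoder target
print(src_tokens)
```

A Transformer encoder-decoder is then trained to map `src_tokens` to `tgt_tokens`; the paper additionally conditions generation on a discrete latent variable so that sampling different latent values yields diverse alternative reactant predictions.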

Authors (4)
  1. Benson Chen (7 papers)
  2. Tianxiao Shen (8 papers)
  3. Tommi S. Jaakkola (42 papers)
  4. Regina Barzilay (106 papers)
Citations (45)
