
State Predictive Information Bottleneck (2011.10127v2)

Published 19 Nov 2020 in physics.chem-ph, cond-mat.stat-mech, physics.bio-ph, and physics.comp-ph

Abstract: The ability to make sense of the massive amounts of high-dimensional data generated from molecular dynamics (MD) simulations is heavily dependent on the knowledge of a low-dimensional manifold (parameterized by a reaction coordinate or RC) that typically distinguishes between relevant metastable states and captures the relevant slow dynamics of interest. Methods based on machine learning and artificial intelligence have been proposed over the years to learn such low-dimensional manifolds, but they are often criticized for a disconnect from more traditional and physically interpretable approaches. To address such concerns, in this work we propose a deep learning based State Predictive Information Bottleneck (SPIB) approach to learn the RC from high-dimensional molecular simulation trajectories. We demonstrate analytically and numerically how the RC learnt in this approach is deeply connected to the committor in chemical physics, and can be used to accurately identify transition states. A crucial hyperparameter in this approach is the time delay, or how far into the future the algorithm should make predictions. Through careful comparisons for benchmark systems, we demonstrate that this hyperparameter choice gives useful control over how coarse-grained we want the metastable state classification of the system to be. We thus believe that this work represents a step forward in the systematic application of deep learning based ideas to molecular simulations in a way that bridges the gap between artificial intelligence and traditional chemical physics.
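The core of the time-delay idea described above can be illustrated with a small data-preparation sketch: each high-dimensional frame x_t is paired with the metastable-state label observed at t + Δt, and those pairs become the prediction targets for the bottleneck model. This is a minimal, hypothetical illustration (the function name, array shapes, and toy data are assumptions, not the paper's actual implementation):

```python
import numpy as np

def build_spib_pairs(traj, labels, lag):
    """Pair each frame x_t with the state label at t + lag.

    traj   : (T, d) array of MD descriptors per frame (hypothetical input)
    labels : (T,) integer metastable-state labels for each frame
    lag    : time delay Delta-t, the key SPIB hyperparameter
    """
    X = traj[:-lag]   # inputs x_t for t = 0 .. T - lag - 1
    y = labels[lag:]  # prediction targets s_{t + lag}
    return X, y

# toy trajectory: 6 frames of a 2-D descriptor, two states
traj = np.arange(12, dtype=float).reshape(6, 2)
labels = np.array([0, 0, 0, 1, 1, 1])
X, y = build_spib_pairs(traj, labels, lag=2)
print(X.shape, y.tolist())  # (4, 2) [0, 1, 1, 1]
```

Increasing `lag` makes the model predict further into the future, which, as the abstract notes, coarsens the resulting state classification: short-lived states that decay within the time delay stop being distinguishable prediction targets.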
