Gradient estimators for normalising flows (2202.01314v2)

Published 2 Feb 2022 in stat.ML, cond-mat.stat-mech, cs.LG, and hep-lat

Abstract: Recently a machine learning approach to Monte Carlo simulations called Neural Markov Chain Monte Carlo (NMCMC) has been gaining traction. In its most popular form it uses neural networks to construct normalizing flows which are then trained to approximate the desired target distribution. In this contribution we present a new gradient estimator for the Stochastic Gradient Descent algorithm (and the corresponding \texttt{PyTorch} implementation) and show that it leads to better training results for the $\phi^4$ model. For this model our estimator achieves the same precision in approximately half of the time needed by the standard approach and ultimately provides better estimates of the free energy. We attribute this effect to the lower variance of the new estimator. In contrast to the standard learning algorithm, our approach does not require estimation of the gradient of the action with respect to the fields, and thus has the potential to further speed up training for models with more complicated actions.
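
To illustrate the distinction the abstract draws, the sketch below contrasts the standard reparameterization-based loss, whose backward pass must evaluate $\partial S/\partial\phi$, with a generic score-function (REINFORCE-style) estimator that detaches the sampled fields and so never differentiates the action. This is a minimal sketch in that spirit, not the paper's actual estimator or implementation: the elementwise affine flow, the lattice size, the potential-only $\phi^4$ action, and the batch-mean baseline used for variance reduction are all illustrative assumptions.

```python
import math
import torch

torch.manual_seed(0)
N = 8 * 8  # number of lattice sites (toy choice)

class AffineFlow(torch.nn.Module):
    """Stand-in for a normalizing flow: an elementwise affine map of Gaussian noise."""
    def __init__(self, n):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(n))
        self.log_sigma = torch.nn.Parameter(torch.zeros(n))

    def sample(self, batch):
        z = torch.randn(batch, self.mu.numel())
        return self.mu + torch.exp(self.log_sigma) * z

    def log_prob(self, phi):
        # Exact log q(phi) for this affine pushforward of a standard Gaussian.
        z = (phi - self.mu) * torch.exp(-self.log_sigma)
        return (-0.5 * z.pow(2) - 0.5 * math.log(2 * math.pi)
                - self.log_sigma).sum(dim=1)

def action(phi, m2=-1.0, lam=1.0):
    """Toy phi^4 potential (no kinetic term), for illustration only."""
    return (0.5 * m2 * phi.pow(2) + lam * phi.pow(4)).sum(dim=1)

flow = AffineFlow(N)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

# Standard reparameterization estimator: gradients flow through the sampled
# fields, so autograd must evaluate dS/dphi during the backward pass.
phi = flow.sample(128)
loss_reparam = (flow.log_prob(phi) + action(phi)).mean()

# Score-function (REINFORCE-style) estimator: the fields are detached, so the
# gradient of the action with respect to phi is never computed. Subtracting
# the batch mean of the signal is a simple variance-reduction baseline.
phi = flow.sample(128).detach()
log_q = flow.log_prob(phi)
signal = (log_q + action(phi)).detach()
signal = signal - signal.mean()
loss_score = (signal * log_q).mean()

opt.zero_grad()
loss_score.backward()
opt.step()
```

In expectation both losses yield the gradient of the reverse Kullback-Leibler divergence between the flow and the target $\propto e^{-S(\phi)}$; the practical difference is that the second form only needs the score $\nabla_\theta \log q$, which is why such estimators can help for models with expensive action gradients.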

Citations (3)