Convergence of Adaptive Biasing Potential methods for diffusions (1603.08088v1)
Abstract: We prove the consistency of an adaptive importance sampling strategy based on biasing the potential energy function $V$ of a diffusion process $dX_t^0=-\nabla V(X_t^0)\,dt+dW_t$; for the sake of simplicity, periodic boundary conditions are assumed, so that $X_t^0$ lives on the flat $d$-dimensional torus. The goal is to sample its invariant distribution $\mu=Z^{-1}\exp\bigl(-V(x)\bigr)\,dx$. The bias $V_t-V$, where $V_t$ is the new (random and time-dependent) potential function, acts only on some coordinates of the system, and is designed to flatten the corresponding empirical occupation measure of the diffusion $X$ in the large time regime. The diffusion process satisfies $dX_t=-\nabla V_t(X_t)\,dt+dW_t$, where the bias $V_t-V$ is a function of the key quantity $\overline{\mu}_t$: a probability occupation measure which depends on the past of the process, {\it i.e.} on $(X_s)_{s\in [0,t]}$. We are thus dealing with a self-interacting diffusion. In this note, we prove that when $t$ goes to infinity, $\overline{\mu}_t$ almost surely converges to $\mu$. Moreover, the approach is justified by the convergence of the bias to a limit which has an interpretation in terms of a free energy. The main argument is a change of variables, which formally validates the consistency of the approach. The convergence is then rigorously proven by adapting the ODE method from stochastic approximation.
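To illustrate the kind of dynamics the abstract describes, the following is a minimal sketch, not taken from the paper, of an adaptive biasing potential simulation in one dimension. It assumes an Euler-Maruyama discretization on the torus, a histogram approximation of the occupation measure $\overline{\mu}_t$, and a simple illustrative bias of the form $V_t - V = \log \overline{\mu}_t$ that penalizes frequently visited regions; the potential, bin count, and step sizes are hypothetical choices, not the paper's.

```python
# Sketch of an adaptive biasing potential (ABP) scheme in 1D (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical potential V on the 1D torus [0, 2*pi) and its gradient.
def V(x):
    return 2.0 * np.cos(x)           # two metastable regions

def grad_V(x):
    return -2.0 * np.sin(x)

L = 2.0 * np.pi                      # period of the torus
n_bins = 64                          # bins for the empirical occupation measure
dt = 1e-3
n_steps = 200_000

counts = np.ones(n_bins)             # histogram proxy for mu_bar_t (smoothed with ones)
x = 0.0

for step in range(n_steps):
    # Illustrative bias V_t - V = log(mu_bar_t), so frequently visited bins are
    # penalized and the occupation histogram flattens in the large time regime.
    dens = counts / counts.sum()
    bias = np.log(dens)
    k = int(x / L * n_bins) % n_bins
    k_next = (k + 1) % n_bins
    grad_bias = (bias[k_next] - bias[k]) / (L / n_bins)   # crude finite difference

    # Euler-Maruyama step for dX_t = -grad V_t(X_t) dt + dW_t, wrapped onto the torus.
    drift = -(grad_V(x) + grad_bias)
    x = (x + drift * dt + np.sqrt(dt) * rng.normal()) % L

    counts[k] += 1                   # update the occupation measure with the new sample

# If the bias does its job, the occupation histogram should be close to flat.
print("occupation histogram (normalized):", np.round(counts / counts.sum(), 3))
```

Reweighting the resulting samples by the final bias would then recover averages with respect to $\mu$; the paper's contribution is the almost sure convergence of $\overline{\mu}_t$ to $\mu$ and of the bias to a free-energy limit, not this particular discretization.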