Multilevel-Langevin pathwise average for Gibbs approximation (2109.07753v2)
Abstract: We propose and study a new multilevel method for the numerical approximation of a Gibbs distribution $\pi$ on $\mathbb{R}^d$, based on (overdamped) Langevin diffusions. This method, inspired by \cite{mainPPlangevin} and \cite{giles_szpruch_invariant}, relies on a multilevel occupation measure, $i.e.$ on an appropriate combination of $R$ occupation measures of (constant-step) Euler schemes with respective steps $\gamma_r = \gamma_0 2^{-r}$, $r=0,\ldots,R$. We first state a quantitative result under general assumptions which guarantees an \textit{$\varepsilon$-approximation} (in an $L^2$-sense) with a cost of the order $\varepsilon^{-2}$, or $\varepsilon^{-2}|\log \varepsilon|^3$ under less contractive assumptions. We then apply it to overdamped Langevin diffusions with strongly convex potential $U:\mathbb{R}^d\rightarrow\mathbb{R}$ and obtain an \textit{$\varepsilon$-complexity} of the order ${\cal O}(d\varepsilon^{-2}\log^3(d\varepsilon^{-2}))$, or ${\cal O}(d\varepsilon^{-2})$ under additional assumptions on $U$. More precisely, up to universal constants, an appropriate choice of the parameters leads to a cost controlled by $(\bar{\lambda}_U\vee 1)^2\,\underline{\lambda}_U^{-3}\, d\varepsilon^{-2}$ (where $\bar{\lambda}_U$ and $\underline{\lambda}_U$ respectively denote the supremum of the largest eigenvalue and the infimum of the smallest eigenvalue of $D^2U$). We finally complete these theoretical results with numerical illustrations, including comparisons to other algorithms in Bayesian learning and an opening to the non-strongly convex setting.
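To make the construction concrete, here is a minimal Python sketch of a multilevel pathwise-average estimator of $\pi(f)$ built from constant-step Euler schemes with steps $\gamma_r = \gamma_0 2^{-r}$, where consecutive levels are coupled through shared Brownian increments. All names (`grad_U`, `f`, `horizons`) and the level-wise trajectory lengths are illustrative assumptions; the paper's exact weights, initialization, and parameter tuning differ.

```python
import numpy as np

def multilevel_langevin_average(grad_U, f, d, R, gamma0, horizons, rng=None):
    """Hypothetical sketch: multilevel pathwise-average estimate of pi(f).

    Level r uses a constant-step Euler scheme with step gamma0 * 2**-r;
    levels r >= 1 contribute corrections computed from a fine/coarse pair
    of schemes driven by the same Brownian increments.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Level 0: plain occupation-measure (time) average along one Euler path.
    gamma = gamma0
    n0 = max(1, int(horizons[0] / gamma))
    x = np.zeros(d)
    acc = 0.0
    for _ in range(n0):
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(d)
        acc += f(x)
    estimate = acc / n0

    # Levels r = 1..R: add (fine average - coarse average), with the coarse
    # scheme consuming the sum of two consecutive fine Brownian increments.
    for r in range(1, R + 1):
        g_fine, g_coarse = gamma0 * 2.0 ** (-r), gamma0 * 2.0 ** (-(r - 1))
        n_coarse = max(1, int(horizons[r] / g_coarse))
        n_fine = 2 * n_coarse
        x_f, x_c = np.zeros(d), np.zeros(d)
        acc_f = acc_c = 0.0
        for k in range(n_fine):
            dW = np.sqrt(g_fine) * rng.standard_normal(d)  # Brownian increment
            x_f = x_f - g_fine * grad_U(x_f) + np.sqrt(2.0) * dW
            acc_f += f(x_f)
            if k % 2 == 0:
                dW_prev = dW
            else:  # one coarse step per two fine steps, same noise
                x_c = x_c - g_coarse * grad_U(x_c) + np.sqrt(2.0) * (dW_prev + dW)
                acc_c += f(x_c)
        estimate += acc_f / n_fine - acc_c / n_coarse

    return estimate
```

As a sanity check under these assumptions, with $U(x) = |x|^2/2$ (so that $\pi = \mathcal{N}(0, I_d)$) and $f(x) = |x|^2$, the estimate should approach $\pi(f) = d$ as the horizons grow:

```python
est = multilevel_langevin_average(
    grad_U=lambda x: x, f=lambda x: np.dot(x, x),
    d=10, R=4, gamma0=0.5, horizons=[200.0] * 5)
print(est)  # close to d = 10 for long enough horizons
```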
- Maxime Egéa (2 papers)
- Fabien Panloup (25 papers)