A Simple Parallel and Distributed Sampling Technique: Local Glauber Dynamics (1802.06676v2)
Abstract: \emph{Sampling} constitutes an important tool in a variety of areas: from machine learning and combinatorial optimization to computational physics and biology. A central class of sampling algorithms is the \emph{Markov Chain Monte Carlo} method, based on the construction of a Markov chain with the desired sampling distribution as its stationary distribution. Many of the traditional Markov chains, such as the \emph{Glauber dynamics}, do not scale well with increasing dimension. To address this shortcoming, we propose a simple local update rule based on the Glauber dynamics that leads to efficient parallel and distributed algorithms for sampling from Gibbs distributions. Concretely, we present a Markov chain that mixes in $O(\log n)$ rounds when Dobrushin's condition for the Gibbs distribution is satisfied. This improves over the \emph{LubyGlauber} algorithm by Feng, Sun, and Yin [PODC'17], which needs $O(\Delta \log n)$ rounds, and their \emph{LocalMetropolis} algorithm, which converges in $O(\log n)$ rounds but requires a considerably stronger mixing condition. Here, $n$ denotes the number of nodes in the graphical model inducing the Gibbs distribution, and $\Delta$ its maximum degree. In particular, our method can sample a uniform proper coloring with $\alpha \Delta$ colors in $O(\log n)$ rounds for any $\alpha>2$, which almost matches the threshold of the sequential Glauber dynamics and improves on the $\alpha>2 +\sqrt{2}$ threshold of Feng et al.
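To make the setting concrete, here is a minimal sketch of the classical sequential (single-site) Glauber dynamics for uniform proper colorings, the baseline chain the paper parallelizes. This is illustrative only: it is not the paper's Local Glauber dynamics, and the graph, function name, and step count are assumptions for the example. Each step picks a uniformly random node and resamples its color uniformly from the colors not used by its neighbors; with $q \ge \Delta + 1$ colors every update has at least one available color and properness is preserved.

```python
import random

def glauber_coloring(adj, q, steps, seed=0):
    """Sequential single-site Glauber dynamics for sampling a uniform
    proper q-coloring of a graph given as an adjacency list `adj`.
    Illustrative sketch only; requires q >= max degree + 1."""
    rng = random.Random(seed)
    n = len(adj)
    # Greedy initial proper coloring (valid when q >= max degree + 1).
    color = [0] * n
    for v in range(n):
        used = {color[u] for u in adj[v] if u < v}
        color[v] = next(c for c in range(q) if c not in used)
    for _ in range(steps):
        v = rng.randrange(n)                   # pick a uniform random node
        blocked = {color[u] for u in adj[v]}   # colors held by neighbors
        free = [c for c in range(q) if c not in blocked]
        color[v] = rng.choice(free)            # resample v's color uniformly
    return color

# Usage: a 5-cycle with q = 3 colors (here Delta = 2, so q >= Delta + 1).
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = glauber_coloring(adj, q=3, steps=1000)
assert all(coloring[v] != coloring[u] for v in adj for u in adj[v])
```

The paper's contribution is a chain in which all $n$ nodes perform such local updates simultaneously per round (with a rule for resolving neighboring conflicts), mixing in $O(\log n)$ rounds under Dobrushin's condition rather than the $O(n \log n)$ sequential steps suggested by updating one node at a time.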