An Information-Theoretic View of Stochastic Localization
Abstract: Given a probability measure $\mu$ over ${\mathbb R}^n$, it is often useful to approximate it by a convex combination of a small number of probability measures, such that each component is close to a product measure. Recently, Ronen Eldan used a stochastic localization argument to prove a general decomposition result of this type. In Eldan's theorem, the `number of components' is characterized by the entropy of the mixture, and `closeness to product' is characterized by the covariance matrix of each component. We present an elementary proof of Eldan's theorem which makes use of an information theory (or estimation theory) interpretation. The proof is analogous to that of an earlier decomposition result known as the `pinning lemma.'
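As a rough illustration (our notation, not taken from the paper), such a decomposition has the schematic form below; the entropy budget $\varepsilon n$ and the choice of norm on the covariance are placeholders standing in for the precise guarantees of Eldan's theorem.

```latex
% Schematic mixture decomposition (illustrative notation only; the exact
% bounds are those of Eldan's theorem and are not reproduced here).
\[
  \mu \;\approx\; \sum_{m=1}^{M} w_m\, \mu_m ,
  \qquad w_m \ge 0, \quad \sum_{m=1}^{M} w_m = 1,
\]
% The number of components is controlled through the entropy of the
% mixing weights; the bound \varepsilon n is a placeholder.
\[
  H(w) \;=\; -\sum_{m=1}^{M} w_m \log w_m \;\le\; \varepsilon\, n ,
\]
% and each component is `close to product' in the sense that its
% covariance matrix is small in a suitable norm (norm left unspecified).
\[
  \bigl\| \mathrm{Cov}(\mu_m) \bigr\| \;\text{small for each } m .
\]
```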