Eldan's Stochastic Localization and the KLS Conjecture: Isoperimetry, Concentration and Mixing (1612.01507v3)

Published 5 Dec 2016 in math.FA, cs.CG, cs.DS, math.MG, and math.PR

Abstract: We show that the Cheeger constant for $n$-dimensional isotropic logconcave measures is $O(n^{1/4})$, improving on the previous best bound of $O(n^{1/3}\sqrt{\log n})$. As corollaries we obtain the same improved bound on the thin-shell estimate, Poincar\'{e} constant and Lipschitz concentration constant and an alternative proof of this bound for the isotropic (slicing) constant; it also follows that the ball walk for sampling from an isotropic logconcave density in ${\bf R}^{n}$ converges in $O^{*}(n^{2.5})$ steps from a warm start. The proof is based on gradually transforming any logconcave density to one that has a significant Gaussian factor via a martingale process. Extending this proof technique, we prove that the log-Sobolev constant of any isotropic logconcave density in ${\bf R}^{n}$ with support of diameter $D$ is $\Omega(1/D)$, resolving a question posed by Frieze and Kannan in 1997. This is asymptotically the best possible estimate and improves on the previous bound of $\Omega(1/D^{2})$ by Kannan-Lov\'{a}sz-Montenegro. It follows that for any isotropic logconcave density, the ball walk with step size $\delta=\Theta(1/\sqrt{n})$ mixes in $O\left(n^{2}D\right)$ proper steps from \emph{any} starting point. This improves on the previous best bound of $O(n^{2}D^{2})$ and is also asymptotically tight. The new bound leads to the following large deviation inequality for an $L$-Lipschitz function $g$ over an isotropic logconcave density $p$: for any $t>0$, \[ \Pr_{x\sim p}\left(\left|g(x)-\bar{g}\right|\geq L\cdot t\right)\leq\exp\left(-\frac{c\cdot t^{2}}{t+\sqrt{n}}\right) \] where $\bar{g}$ is the median or mean of $g$ for $x\sim p$; this generalizes and improves on previous bounds by Paouris and by Gu\'{e}don-Milman. The technique also bounds the ``small ball'' probability in terms of the Cheeger constant, and recovers the current best bound.
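
Note that the displayed deviation bound interpolates between two regimes: since $t+\sqrt{n}\leq 2\sqrt{n}$ when $t\leq\sqrt{n}$ and $t+\sqrt{n}\leq 2t$ when $t\geq\sqrt{n}$, the tail is at most $\exp(-c\,t^{2}/(2\sqrt{n}))$ (Gaussian-type) for small $t$ and at most $\exp(-c\,t/2)$ (exponential-type) for large $t$.

The mixing-time results above concern the Metropolis ball walk with step size $\delta=\Theta(1/\sqrt{n})$. The following Python sketch illustrates that walk in generic form; it is not code from the paper, and the target density (a standard Gaussian in $\mathbf{R}^{50}$), the function names, and the constant in the step size are assumptions made purely for the example.

```python
import numpy as np

def ball_walk(log_density, x0, n_steps, delta=None, rng=None):
    """Metropolis ball walk for sampling from a logconcave density.

    At each step, propose a uniform point in the ball of radius delta
    around the current point and accept with probability
    min(1, p(y)/p(x)). The step size delta = Theta(1/sqrt(n)) matches
    the scaling in the abstract; the constant 1.0 is an arbitrary choice.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    n = x.size
    if delta is None:
        delta = 1.0 / np.sqrt(n)  # Theta(1/sqrt(n)) step size
    samples = []
    for _ in range(n_steps):
        # Uniform proposal in the delta-ball: random direction,
        # radius scaled by U^(1/n) so the point is uniform in the ball.
        direction = rng.standard_normal(n)
        direction /= np.linalg.norm(direction)
        y = x + delta * rng.random() ** (1.0 / n) * direction
        # Metropolis filter (proposal is symmetric).
        if np.log(rng.random()) < log_density(y) - log_density(x):
            x = y
        samples.append(x.copy())
    return np.array(samples)

if __name__ == "__main__":
    # Example target: isotropic standard Gaussian in R^50 (logconcave).
    n = 50
    log_gauss = lambda x: -0.5 * float(x @ x)
    chain = ball_walk(log_gauss, x0=np.zeros(n), n_steps=5000)
    print("norm of empirical mean:", np.linalg.norm(chain[-1000:].mean(axis=0)))
```

The paper's $O(n^{2}D)$ bound counts proper steps of exactly this kind of walk from an arbitrary starting point; the sketch makes no attempt to verify mixing, it only shows the transition rule.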

Citations (92)
