
Rate of convergence of the smoothed empirical Wasserstein distance (2205.02128v3)

Published 4 May 2022 in math.PR, cs.IT, math.IT, math.ST, and stat.TH

Abstract: Consider an empirical measure $\mathbb{P}_n$ induced by $n$ iid samples from a $d$-dimensional $K$-subgaussian distribution $\mathbb{P}$, and let $\gamma = N(0,\sigma^2 I_d)$ be the isotropic Gaussian measure. We study the speed of convergence of the smoothed Wasserstein distance $W_2(\mathbb{P}_n * \gamma, \mathbb{P} * \gamma) = n^{-\alpha + o(1)}$, with $*$ being the convolution of measures. For $K<\sigma$ and in any dimension $d\ge 1$ we show that $\alpha = {1\over2}$. For $K>\sigma$ in dimension $d=1$ we show that the rate is slower and is given by $\alpha = {(\sigma^2 + K^2)^2 \over 4 (\sigma^4 + K^4)} < 1/2$. This resolves several open problems in [GGNWP20], and in particular precisely identifies the amount of smoothing $\sigma$ needed to obtain a parametric rate. In addition, for any $d$-dimensional $K$-subgaussian distribution $\mathbb{P}$, we also establish that $D_{KL}(\mathbb{P}_n * \gamma \| \mathbb{P} * \gamma)$ has rate $O(1/n)$ for $K<\sigma$, but only slows down to $O({(\log n)^{d+1} \over n})$ for $K>\sigma$. The surprising difference in behavior between $W_2^2$ and KL implies the failure of the $T_2$-transportation inequality when $\sigma < K$. Consequently, it follows that for $K>\sigma$ the log-Sobolev inequality (LSI) for the Gaussian mixture $\mathbb{P} * N(0, \sigma^2)$ cannot hold. This closes an open problem in [WW+16], who established the LSI under the condition $K<\sigma$ and asked if their bound can be improved.
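As a quick illustration of the dichotomy in the abstract, the sketch below (not from the paper; the function name and sample parameters are ours) evaluates the stated exponent $\alpha$ in the one-dimensional case: the parametric value $1/2$ when $K<\sigma$, and the formula $(\sigma^2+K^2)^2 / (4(\sigma^4+K^4))$ when $K>\sigma$. Note the two expressions agree at $K=\sigma$, so the exponent degrades continuously once the subgaussian constant exceeds the smoothing level.

```python
# Illustrative sketch of the rate exponent alpha in
# W_2(P_n * gamma, P * gamma) = n^{-alpha + o(1)}, per the abstract (d = 1).

def smoothed_w2_exponent(K: float, sigma: float) -> float:
    """Exponent alpha for a K-subgaussian P smoothed by gamma = N(0, sigma^2)."""
    if K < sigma:
        # Enough smoothing: parametric rate n^{-1/2}.
        return 0.5
    # Under-smoothed regime (K > sigma, d = 1): strictly slower than 1/2.
    return (sigma**2 + K**2) ** 2 / (4 * (sigma**4 + K**4))

# At K = sigma the formula gives (2 sigma^2)^2 / (8 sigma^4) = 1/2;
# for K = 2, sigma = 1 it drops to 25/68 ~ 0.368.
for K, sigma in [(0.5, 1.0), (1.0, 1.0), (2.0, 1.0)]:
    print(f"K={K}, sigma={sigma}: alpha = {smoothed_w2_exponent(K, sigma):.4f}")
```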

Citations (5)

