Learning from MOM's principles: Le Cam's approach (1701.01961v2)

Published 8 Jan 2017 in math.ST and stat.TH

Abstract: We obtain estimation error rates for estimators built by aggregating regularized median-of-means tests, following a construction of Le Cam. The results hold with exponentially large probability (as in the Gaussian framework with independent noise) under only weak moment assumptions on the data and without assuming independence between noise and design. Any norm may be used for regularization; when it has some sparsity-inducing power, we recover sparse rates of convergence. The procedure is robust: a large part of the data may be corrupted, and these outliers need have nothing to do with the oracle we want to reconstruct. Our general risk bound is of order
\begin{equation*}
\max\left(\text{minimax rate in the i.i.d. setup}, \ \frac{\text{number of outliers}}{\text{number of observations}}\right) \enspace.
\end{equation*}
In particular, the number of outliers may be as large as (number of data) $\times$ (minimax rate) without affecting this rate. The remaining data do not have to be identically distributed but need only have equivalent $L_1$ and $L_2$ moments. For example, the minimax rate $s \log(ed/s)/N$ for recovery of an $s$-sparse vector in $\mathbb{R}^d$ is achieved with exponentially large probability by a median-of-means version of the LASSO when the noise has $q_0$ moments for some $q_0 > 2$, the entries of the design matrix have $C_0 \log(ed)$ moments, and the dataset is corrupted by up to $C_1 s \log(ed/s)$ outliers.
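To make the median-of-means (MOM) principle concrete, here is a minimal sketch of the plain MOM mean estimator, the elementary building block behind the tests the paper aggregates. The function name, block-splitting scheme, and toy data below are illustrative assumptions; the paper's actual procedure aggregates regularized MOM tests via Le Cam's construction rather than taking the median of a single statistic.

```python
import numpy as np

def median_of_means(x, n_blocks, seed=None):
    """Median-of-means estimate of E[X] from a 1-D sample.

    Split the sample into n_blocks disjoint blocks, compute the
    empirical mean of each block, and return the median of those
    block means. The estimate tolerates heavy tails and up to
    roughly n_blocks / 2 arbitrarily corrupted observations.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    shuffled = rng.permutation(x)                 # randomize block membership
    blocks = np.array_split(shuffled, n_blocks)   # disjoint, near-equal blocks
    block_means = np.array([b.mean() for b in blocks])
    return np.median(block_means)

# Toy illustration (hypothetical data): a heavy-tailed sample plus outliers.
rng = np.random.default_rng(0)
sample = rng.standard_t(df=3, size=1000)  # heavy tails, true mean 0
sample[:20] = 1e6                         # 20 adversarial outliers

print("empirical mean: ", sample.mean())                         # ruined by outliers
print("median of means:", median_of_means(sample, n_blocks=50))  # close to 0
```

With 50 blocks of 20 points each, the 20 outliers can corrupt at most 20 blocks, so the median block mean is still determined by uncorrupted blocks; this mirrors the abstract's claim that the rate survives a number of outliers proportional to (number of data) $\times$ (minimax rate).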
