Thresholded Lasso for high dimensional variable selection and statistical estimation (1002.1583v2)
Abstract: Given $n$ noisy samples with $p$ dimensions, where $n \ll p$, we show that the multi-step thresholding procedure based on the Lasso -- we call it the {\it Thresholded Lasso} -- can accurately estimate a sparse vector $\beta \in \mathbb{R}^p$ in a linear model $Y = X \beta + \epsilon$, where $X_{n \times p}$ is a design matrix normalized to have column $\ell_2$ norm $\sqrt{n}$, and $\epsilon \sim N(0, \sigma^2 I_n)$. We show that under the restricted eigenvalue (RE) condition (Bickel-Ritov-Tsybakov 09), it is possible to achieve the $\ell_2$ loss within a logarithmic factor of the ideal mean square error one would achieve with an {\em oracle} while selecting a sufficiently sparse model -- hence achieving {\it sparse oracle inequalities}; the oracle would supply perfect information about which coordinates are non-zero and which are above the noise level. In some sense, the Thresholded Lasso recovers the choices that would have been made by the $\ell_0$ penalized least squares estimators, in that it selects a sufficiently sparse model without sacrificing accuracy in estimating $\beta$ or in predicting $X \beta$. We also show for the Gauss-Dantzig selector (Candès-Tao 07) that, if $X$ obeys a uniform uncertainty principle and the true parameter is sufficiently sparse, one achieves the sparse oracle inequalities as above while allowing at most $s_0$ irrelevant variables in the model in the worst case, where $s_0 \leq s$ is the smallest integer such that for $\lambda = \sqrt{2 \log p/n}$, $\sum_{i=1}^p \min(\beta_i^2, \lambda^2 \sigma^2) \leq s_0 \lambda^2 \sigma^2$. Our simulation results on the Thresholded Lasso closely match our theoretical analysis.
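To make the multi-step procedure concrete, the following Python sketch runs an initial Lasso fit with a penalty on the order of $\sigma \sqrt{2 \log p / n}$, hard-thresholds the resulting coefficients, and refits ordinary least squares on the selected support. The use of scikit-learn's Lasso (whose alpha parameterization differs from the paper's $\lambda$ by normalization constants) and the threshold constant c_thresh are illustrative assumptions, not the paper's prescribed tuning.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def thresholded_lasso(X, y, sigma, c_thresh=0.5):
    # Minimal sketch of a multi-step Thresholded Lasso estimator.
    # c_thresh and the penalty scaling below are illustrative choices.
    n, p = X.shape
    lam = sigma * np.sqrt(2.0 * np.log(p) / n)  # lambda * sigma in the paper's notation

    # Step 1: initial Lasso estimate.
    beta_init = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

    # Step 2: hard-threshold small coefficients at a level proportional to lambda * sigma.
    support = np.flatnonzero(np.abs(beta_init) > c_thresh * lam)

    # Step 3: refit unpenalized least squares on the selected support.
    beta_hat = np.zeros(p)
    if support.size > 0:
        beta_hat[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    return beta_hat, support
\end{verbatim}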