The Degrees of Freedom of Partly Smooth Regularizers (1404.5557v4)

Published 22 Apr 2014 in math.ST, cs.IT, math.IT, and stat.TH

Abstract: In this paper, we are concerned with regularized regression problems where the prior regularizer is a proper lower semicontinuous and convex function which is also partly smooth relative to a Riemannian submanifold. This encompasses as special cases several known penalties such as the Lasso ($\ell_1$-norm), the group Lasso ($\ell_1-\ell_2$-norm), the $\ell_\infty$-norm, and the nuclear norm. This also includes so-called analysis-type priors, i.e. compositions of the previously mentioned penalties with linear operators, typical examples being the total variation or fused Lasso penalties. We study the sensitivity of any regularized minimizer to perturbations of the observations and provide its precise local parameterization. Our main sensitivity analysis result shows that the predictor moves locally stably along the same active submanifold as the observations undergo small perturbations. This local stability is a consequence of the smoothness of the regularizer when restricted to the active submanifold, which in turn plays a pivotal role in obtaining a closed-form expression for the variations of the predictor w.r.t. the observations. We also show that, for a variety of regularizers, including polyhedral ones or the group Lasso and its analysis counterpart, this divergence formula holds Lebesgue almost everywhere. When the perturbation is random (with an appropriate continuous distribution), this allows us to derive an unbiased estimator of the degrees of freedom and of the prediction risk of the estimator. Our results hold true without requiring the design matrix to be full column rank. They generalize those already known in the literature, such as the Lasso problem, the general Lasso problem (analysis $\ell_1$-penalty), and the group Lasso, where existing results for the latter assume that the design is full column rank.
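To make the divergence/degrees-of-freedom claim concrete, here is a minimal sketch of its best-known special case, which the paper generalizes: the Lasso with an orthonormal (identity) design. There the regularized predictor is componentwise soft-thresholding, its divergence with respect to the observations equals the number of active (nonzero) coordinates, and that count is the unbiased degrees-of-freedom estimator. This toy setup and the specific values below are illustrative choices, not taken from the paper.

```python
# Hedged sketch: the Lasso divergence formula in the orthonormal-design
# special case. The predictor is soft-thresholding, and the unbiased
# degrees-of-freedom estimator reduces to the active-set size.

def soft_threshold(y, lam):
    """Lasso predictor for an identity design: componentwise shrinkage."""
    return [v - lam if v > lam else v + lam if v < -lam else 0.0 for v in y]

def dof_estimate(y, lam):
    """Degrees-of-freedom estimate: number of active coordinates."""
    return sum(1 for v in y if abs(v) > lam)

def numerical_divergence(y, lam, eps=1e-6):
    """Finite-difference divergence of the predictor w.r.t. y."""
    div = 0.0
    for i in range(len(y)):
        yp, ym = list(y), list(y)
        yp[i] += eps
        ym[i] -= eps
        div += (soft_threshold(yp, lam)[i] - soft_threshold(ym, lam)[i]) / (2 * eps)
    return div

# Illustrative observations (no coordinate sits on the threshold,
# so the formula holds at this point, consistent with the
# "Lebesgue almost everywhere" statement).
y = [3.0, -0.5, 1.2, 0.1, -2.7]
lam = 1.0
print(dof_estimate(y, lam))                  # → 3
print(round(numerical_divergence(y, lam)))   # → 3
```

The agreement between the analytic count and the numerical divergence is the orthonormal-design shadow of the paper's general closed-form sensitivity result on the active submanifold.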

Citations (49)
