Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm (1201.4615v3)

Published 23 Jan 2012 in cs.IT, math.IT, and math.OC

Abstract: This paper studies the long-existing idea of adding a nice smooth function to "smooth" a non-differentiable objective function in the context of sparse optimization, in particular, the minimization of $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the minimization of $\|X\|_*+\frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$, respectively. We show that they can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing $\|x\|_1$ and $\|X\|_*$ under conditions on the sensing operator such as its null-space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector $x^0$, minimizing $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$ returns (nearly) the same solution as minimizing $\|x\|_1$ almost whenever $\alpha\ge 10\|x^0\|_\infty$. The same relation also holds between minimizing $\|X\|_*+\frac{1}{2\alpha}\|X\|_F^2$ and minimizing $\|X\|_*$ for recovering a (nearly) low-rank matrix $X^0$, if $\alpha\ge 10\|X^0\|_2$. Furthermore, we show that the linearized Bregman algorithm for minimizing $\|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax=b$ enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a sparse solution or any properties of $A$. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
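
To make the linearized Bregman algorithm mentioned above concrete, here is a minimal NumPy sketch of the iteration in its dual-gradient-ascent form for $\min \|x\|_1+\frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax=b$. The step-size rule, stopping test, and the small recovery experiment are illustrative assumptions, not choices prescribed by the paper.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding operator: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, tau=None, max_iter=10000, tol=1e-8):
    """Linearized Bregman iteration (dual gradient ascent form) for
        min ||x||_1 + 1/(2*alpha) * ||x||_2^2   subject to   A x = b.
    """
    m, n = A.shape
    if tau is None:
        # Illustrative step size: 1 / (alpha * ||A||_2^2), the reciprocal of a
        # Lipschitz constant of the dual gradient.
        tau = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)
    y = np.zeros(m)  # dual variable
    x = np.zeros(n)
    for _ in range(max_iter):
        x = alpha * shrink(A.T @ y, 1.0)   # primal point induced by the dual iterate
        r = b - A @ x                      # residual, equal to the dual gradient
        if np.linalg.norm(r) <= tol * max(1.0, np.linalg.norm(b)):
            break
        y = y + tau * r                    # dual gradient ascent step
    return x

# Hypothetical usage: recover a planted sparse vector with alpha = 10 * ||x0||_inf.
rng = np.random.default_rng(0)
m, n, k = 80, 200, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x0
x_hat = linearized_bregman(A, b, alpha=10 * np.abs(x0).max())
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```

With $\alpha$ set to $10\|x^0\|_\infty$, as the abstract suggests, the recovered vector should (under suitable conditions on $A$) coincide with the plain $\ell_1$ solution; the global linear convergence claimed in the paper refers to this iteration, while the specific parameters above are only a sketch.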

Citations (105)
