Capped Lp approximations for the composite L0 regularization problem (1707.07787v1)

Published 25 Jul 2017 in math.OC

Abstract: The composite $L_0$ function serves as a sparse regularizer in many applications. The algorithmic difficulty caused by composite $L_0$ regularization (the $L_0$ norm composed with a linear mapping) is usually bypassed by approximating the $L_0$ norm. In this paper we consider capped $L_p$ approximations with $p>0$ for the composite $L_0$ regularization problem. For each $p>0$, the capped $L_p$ function converges pointwise to the $L_0$ norm as the approximation parameter tends to infinity. From the viewpoint of numerical optimization, the capped $L_p$ approximation problem is essentially a penalty method, with an $L_p$ penalty function, for the composite $L_0$ problem. The theoretical results stated below may shed new light on penalty methods for solving the composite $L_0$ problem and help in the design of innovative numerical algorithms. We first establish the existence of optimal solutions to the composite $L_0$ regularization problem and to its capped $L_p$ approximation problem under the conditions that the data fitting function is asymptotically level stable and bounded below. Asymptotically level stable functions cover a rich class of data fitting functions encountered in practice. We then prove that the capped $L_p$ problem asymptotically approximates the composite $L_0$ problem if the data fitting function is a level-bounded function composed with a linear mapping. We further show that if the data fitting function is the indicator function of an asymptotically linear set or the $L_0$ norm composed with an affine mapping, then the composite $L_0$ problem and its capped $L_p$ approximation problem share the same optimal solution set, provided that the approximation parameter is large enough.
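As a minimal illustration of the pointwise convergence described in the abstract, the sketch below assumes the commonly used capped $L_p$ form $\phi_\alpha(t) = \min\{1, \alpha|t|^p\}$ with approximation parameter $\alpha$; the paper's exact parameterization may differ, so this form, and the names capped_lp, composite_l0, and capped_lp_composite, are illustrative assumptions rather than the authors' definitions. For a linear mapping $B$, the surrogate $\sum_i \phi_\alpha((Bx)_i)$ approaches $\|Bx\|_0$ as $\alpha \to \infty$:

    import numpy as np

    # Assumed capped Lp form: phi_alpha(t) = min(1, alpha * |t|^p).
    # Since phi_alpha(0) = 0 and phi_alpha(t) -> 1 for every t != 0 as
    # alpha -> infinity, it converges pointwise to the scalar L0 function.
    def capped_lp(t, p=0.5, alpha=10.0):
        return np.minimum(1.0, alpha * np.abs(t) ** p)

    # Composite L0 regularizer: the number of nonzero entries of Bx.
    def composite_l0(x, B):
        return np.count_nonzero(B @ x)

    # Capped Lp surrogate for ||Bx||_0, applied componentwise to Bx.
    def capped_lp_composite(x, B, p=0.5, alpha=10.0):
        return capped_lp(B @ x, p, alpha).sum()

    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 8))
    x = rng.standard_normal(8)

    # As alpha grows, the surrogate value approaches ||Bx||_0.
    for alpha in (1e0, 1e2, 1e4, 1e6):
        print(alpha, capped_lp_composite(x, B, alpha=alpha), composite_l0(x, B))

For a generic random $B$ and $x$, every entry of $Bx$ is nonzero, so the printed surrogate values tend to $\|Bx\|_0 = 5$ as $\alpha$ increases, consistent with the pointwise convergence claim.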
