Sparse Solutions of a Class of Constrained Optimization Problems (1907.00880v5)

Published 1 Jul 2019 in math.OC and stat.ML

Abstract: In this paper, we consider a well-known sparse optimization problem that aims to find a sparse solution of a possibly noisy underdetermined system of linear equations. Mathematically, it can be modeled in a unified manner by minimizing $\|\mathbf{x}\|_p^p$ subject to $\|A\mathbf{x}-\mathbf{b}\|_q\leq\sigma$ for given $A \in \mathbb{R}^{m \times n}$, $\mathbf{b}\in\mathbb{R}^m$, $\sigma \geq 0$, $0\leq p\leq 1$ and $q \geq 1$. We then study various properties of the optimal solutions of this problem. Specifically, without any condition on the matrix $A$, we provide upper bounds in cardinality and infinity norm for the optimal solutions, and show that all optimal solutions must be on the boundary of the feasible set when $0<p<1$. Moreover, for $q \in \{1,\infty\}$, we show that the problem with $0<p<1$ has a finite number of optimal solutions and prove that there exists $0<p^*<1$ such that the solution set of the problem with any $0<p<p^*$ is contained in the solution set of the problem with $p=0$, and there further exists $0<\bar{p}<p^*$ such that the solution set of the problem with any $0<p\leq\bar{p}$ remains unchanged. An estimation of such $p^*$ is also provided. In addition, to solve the constrained nonconvex non-Lipschitz $L_p$-$L_1$ problem ($0<p<1$ and $q=1$), we propose a smoothing penalty method and show that, under some mild conditions, any cluster point of the sequence generated is a KKT point of our problem. Some numerical examples are given to implicitly illustrate the theoretical results and show the efficiency of the proposed algorithm for the constrained $L_p$-$L_1$ problem under different noises.
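
The abstract mentions a smoothing penalty method for the constrained $L_p$-$L_1$ problem but does not spell out the algorithm here. The following is a minimal, hypothetical sketch of one generic smoothing-penalty scheme for $\min_x \|x\|_p^p$ subject to $\|Ax-b\|_1\leq\sigma$, not the authors' actual method: both the non-Lipschitz $L_p$ term and the $L_1$ residual are smoothed with a parameter $\varepsilon$, the constraint is handled by a quadratic penalty, and each smoothed subproblem is solved by plain gradient descent. All function names, parameter values, and the $\varepsilon$/penalty update schedule are illustrative assumptions.

```python
# Hypothetical sketch of a smoothing penalty scheme for
#     min_x  ||x||_p^p   s.t.  ||A x - b||_1 <= sigma,   0 < p < 1.
# NOT the paper's algorithm: the Lp term and the L1 residual are both smoothed
# with eps, the constraint is enforced by a quadratic penalty with weight rho,
# and the smoothed subproblems are solved by simple gradient descent.
import numpy as np

def smoothing_penalty_lp_l1(A, b, sigma, p=0.5, rho=1.0, eps=1.0,
                            outer_iters=20, inner_iters=500, step=1e-3):
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(outer_iters):
        cur_step = step / max(rho, 1.0)  # crude step scaling as rho grows
        for _ in range(inner_iters):
            r = A @ x - b
            sm_abs_r = np.sqrt(r**2 + eps**2)       # smoothed |r_i|
            viol = sm_abs_r.sum() - sigma           # smoothed constraint value
            # gradient of the smoothed Lp term: sum_i (x_i^2 + eps^2)^(p/2)
            g_lp = p * x * (x**2 + eps**2)**(p / 2 - 1)
            # gradient of the penalty (rho/2) * max(viol, 0)^2
            g_pen = rho * viol * (A.T @ (r / sm_abs_r)) if viol > 0 else 0.0
            x = x - cur_step * (g_lp + g_pen)
        eps *= 0.5   # tighten the smoothing
        rho *= 2.0   # strengthen the penalty
    return x

# Small synthetic usage example (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.8]
b = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = smoothing_penalty_lp_l1(A, b, sigma=0.5)
print("entries above 0.1 in magnitude:", np.flatnonzero(np.abs(x_hat) > 0.1))
```

The decreasing smoothing parameter and increasing penalty weight mimic the general structure of smoothing penalty methods; the paper instead establishes that, under mild conditions, cluster points of its generated sequence are KKT points of the original constrained problem.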

Citations (4)
