Iteration complexity analysis of random coordinate descent methods for $\ell_0$ regularized convex problems
Abstract: In this paper we analyze a family of general random block coordinate descent methods for the minimization of $\ell_0$ regularized optimization problems, i.e., problems whose objective is the sum of a smooth convex function and the $\ell_0$ regularization term. Our family of methods covers particular cases such as random block coordinate gradient descent and random proximal coordinate descent methods. We analyze necessary optimality conditions for this nonconvex $\ell_0$ regularized problem and devise a separation of the set of local minima into restricted classes based on approximation versions of the objective function. We provide a unified analysis of the almost sure convergence for this family of block coordinate descent algorithms and prove that, for each approximation version, the limit points are local minima from the corresponding restricted class of local minimizers. Under a strong convexity assumption, we prove linear convergence in probability for our family of methods.
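The abstract does not spell out the algorithm, but one member of the family it describes, random proximal coordinate descent for an $\ell_0$ regularized problem, can be sketched as follows. This is a minimal illustration on a hypothetical least-squares instance $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_0$, not the paper's exact scheme: at each step one coordinate is drawn at random, a gradient step with coordinatewise stepsize $1/L_i$ is taken, and the coordinatewise proximal operator of the $\ell_0$ term (a hard threshold at $\sqrt{2\lambda/L_i}$) is applied.

```python
import numpy as np

def random_coord_l0_descent(A, b, lam, steps=5000, seed=0):
    """Random proximal coordinate descent (illustrative sketch) for
    min_x 0.5*||A x - b||^2 + lam * ||x||_0.
    Each iteration: pick a random coordinate i, take a partial gradient
    step with stepsize 1/L_i, then apply the l0 hard-threshold prox."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    L = (A ** 2).sum(axis=0)        # coordinatewise Lipschitz constants ||A_i||^2
    x = np.zeros(n)
    r = A @ x - b                   # residual A x - b, maintained incrementally
    for _ in range(steps):
        i = rng.integers(n)
        g = A[:, i] @ r             # partial gradient of the smooth term
        z = x[i] - g / L[i]
        # prox of (lam/L_i)*|.|_0: keep z only if z^2 > 2*lam/L_i
        new = z if z * z > 2.0 * lam / L[i] else 0.0
        r += (new - x[i]) * A[:, i]
        x[i] = new
    return x
```

Because the $\ell_0$ prox is a hard threshold rather than the soft threshold of the $\ell_1$ prox, surviving coordinates are not shrunk, which is exactly why the landscape is nonconvex and the paper's limit-point analysis distinguishes restricted classes of local minima.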