Stochastic Proximal Methods for Non-Smooth Non-Convex Constrained Sparse Optimization (1905.10188v1)
Abstract: This paper focuses on stochastic proximal gradient methods for optimizing a smooth non-convex loss function with a non-smooth non-convex regularizer and convex constraints. To the best of our knowledge, we present the first non-asymptotic convergence results for this class of problems. We present two simple stochastic proximal gradient algorithms, for general stochastic and finite-sum optimization problems, whose convergence complexities match or improve upon the current best results for the unconstrained problem setting. In a numerical experiment we compare our algorithms with the current state-of-the-art deterministic algorithm and find that our algorithms exhibit superior convergence.
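For intuition, the following is a minimal sketch of the general iteration pattern the abstract describes: a stochastic gradient step on the smooth loss, a proximal step on a sparsity-inducing regularizer, and a projection onto a convex constraint set. It is not the paper's algorithm; the regularizer (here a convex l1 stand-in rather than a non-convex penalty), the separate prox-then-project composition, and all names and parameters (soft_threshold, project_l2_ball, eta, lam, radius) are illustrative assumptions.

```python
# Illustrative sketch only: a generic stochastic proximal gradient iteration with a
# convex constraint, NOT the specific algorithms proposed in the paper.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (a convex stand-in; the paper targets
    # non-smooth non-convex regularizers, whose prox would differ).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_l2_ball(v, radius):
    # Euclidean projection onto the convex set {x : ||x||_2 <= radius}.
    norm = np.linalg.norm(v)
    return v if norm <= radius else v * (radius / norm)

def stochastic_prox_grad(A, b, eta=0.01, lam=0.1, radius=5.0, iters=2000, seed=0):
    # Least-squares loss f(x) = mean_i 0.5 * (a_i^T x - b_i)^2, one random sample
    # per iteration; prox and projection are applied sequentially as a simplification.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)
        grad_i = (A[i] @ x - b[i]) * A[i]                # stochastic gradient of the smooth loss
        x = soft_threshold(x - eta * grad_i, eta * lam)  # proximal step on the regularizer
        x = project_l2_ball(x, radius)                   # enforce the convex constraint
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    print(np.round(stochastic_prox_grad(A, b)[:10], 3))  # recovers an approximately sparse solution
```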