A Proximal Stochastic Gradient Method with Adaptive Step Size and Variance Reduction for Convex Composite Optimization (2509.11043v1)
Abstract: In this paper, we propose a proximal stochastic gradient algorithm (PSGA) for solving composite optimization problems by incorporating variance reduction techniques and an adaptive step-size strategy. In the PSGA method, the objective function consists of two components: one is a smooth convex function, and the other is a non-smooth convex function. We establish the strong convergence of the proposed method, provided that the smooth convex function is Lipschitz continuous. We also prove that the expected value of the error between the estimated gradient and the actual gradient converges to zero. Furthermore, we establish an $O(\sqrt{1/k})$ convergence rate for our method. Finally, the effectiveness of the proposed method is validated through numerical experiments on logistic regression and Lasso regression.
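The abstract does not spell out the exact PSGA update or its adaptive step-size rule, but the ingredients it names (a proximal step for the non-smooth term plus a variance-reduced stochastic gradient for the smooth term) can be illustrated on one of the paper's own test problems. Below is a minimal sketch, assuming an SVRG-style variance-reduction scheme and a fixed step size in place of the paper's adaptive rule; the functions `soft_threshold` and `prox_svrg_lasso` and all parameter values are hypothetical and not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg_lasso(A, b, lam, step=0.1, n_epochs=20, seed=0):
    """Proximal stochastic gradient with SVRG-style variance reduction
    for the Lasso problem:  min_x (1/2n) ||Ax - b||^2 + lam * ||x||_1.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_epochs):
        # Full gradient at a reference point: the variance-reduction anchor.
        x_ref = x.copy()
        full_grad = A.T @ (A @ x_ref - b) / n
        for _ in range(n):
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced stochastic gradient of the smooth part.
            g = a_i * (a_i @ x - b[i]) - a_i * (a_i @ x_ref - b[i]) + full_grad
            # Proximal step handles the non-smooth l1 term.
            x = soft_threshold(x - step * g, step * lam)
    return x

# Usage on synthetic sparse-recovery data.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = prox_svrg_lasso(A, b, lam=0.05)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```

The design point this sketch highlights is the one the abstract emphasizes: because the full gradient is recomputed only once per epoch and reused to correct each stochastic gradient, the variance of the gradient estimate shrinks as the iterates approach the reference point, which is what makes convergence-rate guarantees of the stated kind attainable.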