A Gradient Complexity Analysis for Minimizing the Sum of Strongly Convex Functions with Varying Condition Numbers (2208.06524v1)

Published 12 Aug 2022 in math.OC

Abstract: A popular approach to minimizing a finite sum of convex functions is stochastic gradient descent (SGD) and its variants. Fundamental research questions associated with SGD include: (i) finding a lower bound on the number of times the gradient oracle of each individual function must be accessed in order to find an $\epsilon$-minimizer of the overall objective; and (ii) designing algorithms that are guaranteed to find an $\epsilon$-minimizer of the overall objective in expectation using no more than a certain number of gradient-oracle accesses per function (in terms of $1/\epsilon$), i.e., an upper bound. If these two bounds are of the same order of magnitude, the algorithms may be called optimal. Most existing results along this line of research assume that the functions in the objective share the same condition number. In this paper, the first model we study is the problem of minimizing the sum of finitely many strongly convex functions whose condition numbers are all different. We propose an SGD method for this model and show that it is optimal in gradient computations, up to a logarithmic factor. We then consider a constrained separate block optimization model and present lower and upper bounds on its gradient-computation complexity. Next, we propose to solve the Fenchel dual of the constrained block optimization model via the SGD method introduced earlier, and show that this yields a lower iteration complexity than solving the original model by an ADMM-type approach. Finally, we extend the analysis to the general composite convex optimization model and obtain gradient-computation complexity results under certain conditions.
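
To make the finite-sum setting concrete, the sketch below is a minimal illustration, not the algorithm proposed in the paper. It assumes quadratic component functions $f_i(x) = \tfrac{1}{2}\mu_i\|x\|^2 + \tfrac{1}{2}(a_i^\top x - b_i)^2$, each $\mu_i$-strongly convex and $L_i$-smooth with its own condition number $\kappa_i = L_i/\mu_i$, and runs SGD with a generic importance-sampling heuristic (probabilities proportional to $L_i$) and a classic $O(1/t)$ step size. All names and choices here are assumptions made for illustration only.

```python
import numpy as np

# Hypothetical finite-sum instance: F(x) = sum_i f_i(x) with
# f_i(x) = 0.5 * mu_i * ||x||^2 + 0.5 * (a_i @ x - b_i)^2,
# so f_i is mu_i-strongly convex and L_i = mu_i + ||a_i||^2 smooth.
rng = np.random.default_rng(0)
n, d = 10, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
mu = rng.uniform(0.1, 1.0, size=n)            # per-function strong-convexity moduli
L = mu + np.linalg.norm(A, axis=1) ** 2       # per-function smoothness constants
kappa = L / mu                                # per-function condition numbers (all different)

def grad_fi(x, i):
    """Gradient of the i-th component function (one gradient-oracle access)."""
    return mu[i] * x + (A[i] @ x - b[i]) * A[i]

# Importance sampling proportional to L_i: a standard heuristic for
# heterogeneous smoothness, NOT the specific sampling scheme of the paper.
p = L / L.sum()

mu_F = mu.sum()                               # F is (sum_i mu_i)-strongly convex
x = np.zeros(d)
for t in range(50000):
    i = rng.choice(n, p=p)
    g = grad_fi(x, i) / p[i]                  # unbiased estimator of grad F(x)
    eta = 2.0 / (mu_F * (t + 2))              # O(1/t) step size for strongly convex SGD
    x -= eta * g

# Exact minimizer of this quadratic instance, for comparison.
x_star = np.linalg.solve(mu.sum() * np.eye(d) + A.T @ A, A.T @ b)
print("distance to minimizer:", np.linalg.norm(x - x_star))
```

The point of the sketch is only to show what "one gradient-oracle access per iteration" means in the finite-sum model and how per-function constants $\mu_i$, $L_i$, $\kappa_i$ enter the picture; the paper's contribution is the complexity analysis and an SGD variant that is optimal (up to a logarithmic factor) when these condition numbers differ.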
