An incremental descent method for multi-objective optimization (2105.11845v1)

Published 25 May 2021 in math.OC and cs.CC

Abstract: Current state-of-the-art multi-objective optimization solvers, by computing gradients of all $m$ objective functions per iteration, produce after $k$ iterations a measure of proximity to critical conditions that is upper-bounded by $O(1/\sqrt{k})$ when the objective functions are assumed to have $L$-Lipschitz continuous gradients; i.e., they require $O(m/\epsilon^2)$ gradient and function computations to drive a measure of proximity to critical conditions below some target $\epsilon$. We reduce this to $O(1/\epsilon^2)$ with a method that requires only a constant number of gradient and function computations per iteration, and thus we obtain for the first time a multi-objective descent-type method with a query complexity unaffected by increasing values of $m$. For this, a new multi-objective descent direction is identified, which we name the \emph{central descent direction}, and an incremental approach is proposed. Robustness properties of the central descent direction are established, measures of proximity to critical conditions are derived, and the incremental strategy for finding solutions to the multi-objective problem is shown to attain convergence properties unattained by previous methods. To the best of our knowledge, this is the first method to achieve this with no additional a priori information on the structure of the problem, such as that used by scalarization techniques, and with no prior knowledge of the regularity of the objective functions other than Lipschitz continuity of the gradients.
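
To make the complexity claim concrete, here is the arithmetic the abstract implies (a worked restatement; the symbol $\mathcal{M}$ for the proximity-to-criticality measure is assumed here, not taken from the paper):

$$\min_{0 \le i \le k} \mathcal{M}(x_i) \le O\!\left(\tfrac{1}{\sqrt{k}}\right) \le \epsilon \quad \Longrightarrow \quad k = O\!\left(\tfrac{1}{\epsilon^2}\right).$$

At $m$ gradient and function evaluations per iteration, this gives the $O(m/\epsilon^2)$ query cost of existing solvers; at a constant number of evaluations per iteration, as in the proposed incremental method, it gives $O(1/\epsilon^2)$, independent of $m$.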

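The abstract does not define the central descent direction, so the sketch below illustrates only the query-cost contrast, under stated assumptions: it uses the classical multi-objective steepest-descent direction for $m = 2$ objectives (the negated minimum-norm point of the convex hull of the gradients, which has a closed form on a segment), and an incremental variant that refreshes one stored gradient per iteration, in the spirit of incremental-gradient methods. The objectives, function names, and staleness scheme are all hypothetical, not the paper's method.

```python
import numpy as np

# Two smooth objectives with Lipschitz-continuous gradients (illustrative choices).
def g1(x): return x - 1.0   # gradient of f1(x) = 0.5 * ||x - 1||^2
def g2(x): return x + 1.0   # gradient of f2(x) = 0.5 * ||x + 1||^2
grads = [g1, g2]

def min_norm_direction(ga, gb):
    """Negated minimum-norm point of the segment [ga, gb]: the classical
    multi-objective steepest-descent direction for m = 2 objectives.
    Closed form: project the origin onto the segment."""
    d = ga - gb
    denom = np.dot(d, d)
    t = 0.5 if denom == 0.0 else np.clip(np.dot(ga, d) / denom, 0.0, 1.0)
    return -((1.0 - t) * ga + t * gb)

def full_gradient_descent(x, steps, lr=0.1):
    """Baseline: all m gradients per iteration -> O(m/eps^2) total queries."""
    for _ in range(steps):
        x = x + lr * min_norm_direction(g1(x), g2(x))
    return x

def incremental_descent(x, steps, lr=0.1):
    """One gradient query per iteration: refresh the stored gradient of a
    single objective (cyclically) and reuse stale ones for the rest."""
    stored = [g(x) for g in grads]     # one-time warm start
    for k in range(steps):
        i = k % len(grads)
        stored[i] = grads[i](x)        # the only gradient query this iteration
        x = x + lr * min_norm_direction(stored[0], stored[1])
    return x

x0 = np.array([3.0, -2.0])
print(full_gradient_descent(x0, 200))  # approaches a Pareto-critical point
print(incremental_descent(x0, 200))
```

Per iteration, the baseline issues $m$ gradient queries while the incremental loop issues one; this is where the gap between $O(m/\epsilon^2)$ and $O(1/\epsilon^2)$ total queries originates.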