Convergence rates analysis of a multiobjective proximal gradient method (2010.08217v5)

Published 16 Oct 2020 in math.OC

Abstract: Many descent algorithms for multiobjective optimization have been developed over the last two decades. Tanabe et al. (Comput Optim Appl 72(2):339--361, 2019) proposed a proximal gradient method for multiobjective optimization, which solves problems whose objective functions are the sum of a continuously differentiable function and a closed, proper, and convex one. Under reasonable assumptions, the accumulation points of the sequences generated by this method are known to be Pareto stationary. However, that paper did not establish convergence rates. Here, we show global convergence rates for the multiobjective proximal gradient method, matching what is known in scalar optimization. More specifically, using merit functions to measure the complexity, we present convergence rates for non-convex ($O(\sqrt{1/k})$), convex ($O(1/k)$), and strongly convex ($O(r^k)$ for some $r \in (0, 1)$) problems. We also extend the so-called Polyak-Łojasiewicz (PL) inequality to multiobjective optimization and establish a linear convergence rate ($O(r^k)$ for some $r \in (0, 1)$) for multiobjective problems that satisfy such inequalities.
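
To make the iteration concrete, below is a minimal Python sketch of the smooth special case of the method (both nonsmooth terms $g_i \equiv 0$), where it reduces to multiobjective steepest descent. The bi-objective quadratic test problem and all function names are illustrative assumptions, not taken from the paper; this is a sketch of the general technique, not the authors' implementation.

```python
import numpy as np

def mo_steepest_descent_step(x, g1, g2, lip):
    """One iteration of the multiobjective proximal gradient method in the
    smooth special case (both nonsmooth terms identically zero), i.e.
    multiobjective steepest descent. The search direction solves
        min_d  max_i <grad_i, d> + (lip / 2) * ||d||^2,
    whose dual asks for the minimum-norm point in the convex hull of the
    gradients; with two objectives that dual is one-dimensional and has a
    closed-form solution."""
    diff = g2 - g1
    denom = diff @ diff
    # lam is the weight on g1 in the convex combination; clipping to [0, 1]
    # covers the boundary cases, and denom == 0 means g1 == g2.
    lam = np.clip((g2 @ diff) / denom, 0.0, 1.0) if denom > 0 else 0.5
    d = -(lam * g1 + (1.0 - lam) * g2) / lip
    return x + d

# Hypothetical bi-objective test problem (not from the paper): two strongly
# convex quadratics whose Pareto set is the segment between their minimizers.
grad_f1 = lambda x: x - 1.0   # gradient of f1(x) = 0.5 * ||x - 1||^2
grad_f2 = lambda x: x + 1.0   # gradient of f2(x) = 0.5 * ||x + 1||^2

x = np.array([5.0, -3.0])
for _ in range(50):
    x = mo_steepest_descent_step(x, grad_f1(x), grad_f2(x), lip=1.0)
print(x)  # a Pareto stationary point on the segment between (1, 1) and (-1, -1)
```

With two objectives the dual subproblem admits the closed form above; for more objectives, or for nonzero convex terms $g_i$, the per-iteration subproblem would instead be solved with a quadratic-programming or proximal solver.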
