
Inexact Bregman Proximal Gradient Method and its Inertial Variant with Absolute and Partial Relative Stopping Criteria (2109.05690v5)

Published 13 Sep 2021 in math.OC

Abstract: The Bregman proximal gradient method (BPGM), which uses the Bregman distance as a proximity measure in the iterative scheme, has recently been re-developed for minimizing convex composite problems without the global Lipschitz gradient continuity assumption. This makes the BPGM appealing for a wide range of applications, and hence it has received growing attention in recent years. However, most existing convergence results are only obtained under the assumption that the involved subproblems are solved exactly, which is unrealistic in many applications and limits the applicability of the BPGM. To make the BPGM implementable and practical, in this paper, we develop inexact versions of the BPGM (denoted by iBPGM) by employing either an absolute-type stopping criterion or a partial relative-type stopping criterion for solving the subproblems. The $\mathcal{O}(1/k)$ convergence rate and the convergence of the sequence are also established for our iBPGM under some conditions. Moreover, we develop an inertial variant of our iBPGM (denoted by v-iBPGM) and establish the $\mathcal{O}(1/k^{\gamma})$ convergence rate, where $\gamma\geq1$ is a restricted relative smoothness exponent depending on the smooth function in the objective and the kernel function. Specifically, when the smooth function in the objective has a Lipschitz continuous gradient and the kernel function is strongly convex, we have $\gamma=2$ and thus the v-iBPGM improves the convergence rate of the iBPGM from $\mathcal{O}(1/k)$ to $\mathcal{O}(1/k^2)$, in accordance with the existing results on the exact accelerated BPGM. Finally, some preliminary numerical experiments for solving the discrete quadratic regularized optimal transport problem are conducted to illustrate the convergence behaviors of our iBPGM and v-iBPGM under different inexactness settings.
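To give a concrete feel for the iteration the abstract describes, here is a minimal sketch of a Bregman proximal gradient step, not the paper's iBPGM itself: it uses the entropy kernel $h(x)=\sum_i x_i\log x_i$ over the probability simplex and takes $g=0$, so each Bregman subproblem has a closed-form exponentiated-gradient solution (the paper's contribution is precisely the case where such subproblems must be solved inexactly). All names and the toy objective are illustrative assumptions.

```python
import numpy as np

def bpgm_entropy(grad_f, x0, step, iters):
    """Sketch of the Bregman proximal gradient iteration with the
    entropy kernel h(x) = sum_i x_i log x_i on the simplex and g = 0.
    Each step solves  argmin_z <grad_f(x), z> + (1/step) * D_h(z, x)
    exactly via the exponentiated-gradient update; the paper's iBPGM
    instead allows this subproblem to be solved only approximately."""
    x = x0.copy()
    for _ in range(iters):
        z = x * np.exp(-step * grad_f(x))  # unnormalized Bregman step
        x = z / z.sum()                    # project back onto the simplex
    return x

# toy problem: minimize 0.5 * ||x - c||^2 over the simplex (c lies in it)
c = np.array([0.7, 0.2, 0.1])
grad = lambda x: x - c
x_star = bpgm_entropy(grad, np.full(3, 1.0 / 3.0), step=1.0, iters=2000)
```

Since the gradient of the toy objective is 1-Lipschitz and the entropy kernel is 1-strongly convex on the simplex, a unit step size is admissible here; in the non-Lipschitz settings the paper targets, the step is governed by relative smoothness with respect to the kernel instead.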


Authors (2)
