A Unified Bregman Alternating Minimization Algorithm for Generalized DC Programming with Application to Imaging Data (2209.07323v3)

Published 15 Sep 2022 in math.OC

Abstract: In this paper, we consider a class of nonconvex (not necessarily differentiable) optimization problems called generalized DC (Difference-of-Convex functions) programming, which minimizes the sum of two separable DC parts and one two-block-variable coupling function. To circumvent the nonconvexity and nonseparability of the problem under consideration, we introduce a Unified Bregman Alternating Minimization Algorithm (UBAMA) that maximally exploits the favorable DC structure of the objective. Specifically, we first follow the spirit of alternating minimization to update each block variable sequentially, which efficiently tackles the nonseparability caused by the coupling function. Then, we employ the Fenchel-Young inequality to approximate the second DC components (i.e., the concave parts) so that each subproblem reduces to a convex optimization problem, thereby alleviating the computational burden of the nonconvex DC parts. Moreover, each subproblem incorporates a Bregman proximal regularization term, which often induces closed-form subproblem solutions when appropriate Bregman kernel functions are chosen. Remarkably, our algorithm not only provides an algorithmic framework for understanding the iterative schemes of several existing algorithms, but also admits implementable schemes with easier subproblems than some state-of-the-art first-order algorithms developed for generic nonconvex and nonsmooth optimization problems. Theoretically, we prove that the sequence generated by our algorithm converges globally to a critical point under the Kurdyka-Łojasiewicz (KŁ) condition. In addition, we estimate the local convergence rates of our algorithm when the KŁ exponent is known a priori.
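
The iteration the abstract describes (alternating convex subproblems obtained by linearizing the concave DC components via Fenchel-Young subgradients, each augmented with a Bregman proximal term) can be sketched concretely. Below is a minimal, hypothetical Python instantiation, not the authors' implementation: it assumes the toy model f1(x) = ½‖x−b‖², f2(x) = λ‖x‖₁, g1(y) = τ‖y‖₁, g2(y) = (γ/2)‖y‖², H(x, y) = (μ/2)‖x−y‖², and the Euclidean Bregman kernel (β/2)‖·‖², choices under which both subproblems have closed-form solutions. All function names and parameter values are illustrative assumptions.

```python
# Hypothetical sketch of a UBAMA-style iteration for the generalized DC model
#     min_{x,y}  f1(x) - f2(x) + g1(y) - g2(y) + H(x, y)
# with the illustrative choices stated above. NOT the paper's reference code.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ubama_sketch(b, lam=0.1, tau=0.1, gam=0.05, mu=1.0, beta=1.0, iters=200):
    x = np.zeros_like(b)
    y = np.zeros_like(b)
    for _ in range(iters):
        # Fenchel-Young linearization of the concave part -f2 at x^k:
        # take a subgradient xi of f2(x) = lam*||x||_1 (sign(0) = 0 is valid).
        xi = lam * np.sign(x)
        # x-subproblem: strongly convex quadratic with closed-form minimizer of
        # 0.5||x-b||^2 - <xi,x> + 0.5*mu*||x - y^k||^2 + 0.5*beta*||x - x^k||^2.
        x = (b + xi + mu * y + beta * x) / (1.0 + mu + beta)
        # Linearize -g2 at y^k using its gradient eta = gam * y^k.
        eta = gam * y
        # y-subproblem: l1-regularized quadratic, i.e.
        # tau*||y||_1 - <eta,y> + 0.5*mu*||x^{k+1} - y||^2 + 0.5*beta*||y - y^k||^2,
        # solved exactly by soft-thresholding around the weighted center c.
        c = (eta + mu * x + beta * y) / (mu + beta)
        y = soft_threshold(c, tau / (mu + beta))
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = rng.standard_normal(50)
    x, y = ubama_sketch(b)
    print("||x - y|| =", np.linalg.norm(x - y))
```

With the Euclidean kernel, the x-subproblem is a strongly convex quadratic and the y-subproblem reduces to soft-thresholding; per the abstract, choosing other Bregman kernel functions can preserve this closed-form property for more structured models.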
