
Convergence rate analysis of a sequential convex programming method with line search for a class of constrained difference-of-convex optimization problems (2001.06998v2)

Published 20 Jan 2020 in math.OC

Abstract: In this paper, we study the sequential convex programming method with monotone line search (SCP$_{ls}$) in [46] for a class of difference-of-convex (DC) optimization problems with multiple smooth inequality constraints. The SCP$_{ls}$ is a representative variant of moving-ball-approximation-type algorithms [6,10,13,54] for constrained optimization problems. We analyze the convergence rate of the sequence generated by SCP$_{ls}$ in both nonconvex and convex settings by imposing suitable Kurdyka-Łojasiewicz (KL) assumptions. Specifically, in the nonconvex settings, we assume that a special potential function related to the objective and the constraints is a KL function, while in the convex settings we impose KL assumptions directly on the extended objective function (i.e., sum of the objective and the indicator function of the constraint set). A relationship between these two different KL assumptions is established in the convex settings under additional differentiability assumptions. We also discuss how to deduce the KL exponent of the extended objective function from its Lagrangian in the convex settings, under additional assumptions on the constraint functions. Thanks to this result, the extended objectives of some constrained optimization models such as minimizing $\ell_1$ subject to logistic/Poisson loss are found to be KL functions with exponent $\frac12$ under mild assumptions. To illustrate how our results can be applied, we consider SCP$_{ls}$ for minimizing $\ell_{1-2}$ [60] subject to residual error measured by $\ell_2$ norm/Lorentzian norm [21]. We first discuss how the various conditions required in our analysis can be verified, and then perform numerical experiments to illustrate the convergence behaviors of SCP$_{ls}$.
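The $\ell_{1-2}$ objective mentioned in the abstract uses the DC decomposition $\|x\|_1 - \|x\|_2$: both pieces are convex, so each iteration can linearize the subtracted $\|x\|_2$ term at the current iterate and solve (or take a proximal-gradient step on) the resulting convex model. The sketch below is not the constrained SCP$_{ls}$ algorithm from the paper; it is a simplified proximal difference-of-convex iteration for a *penalized* stand-in problem, $\min_x \|x\|_1 - \|x\|_2 + \frac{\lambda}{2}\|Ax-b\|_2^2$, shown only to illustrate the DC linearization idea. The function names and the fixed step-size choice are this sketch's assumptions, not the paper's.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_dc_l1_minus_l2(A, b, lam=1.0, iters=200):
    """Simplified proximal-DC sketch (NOT the paper's constrained SCP_ls) for
        min_x  ||x||_1 - ||x||_2 + (lam/2) * ||A x - b||_2^2.
    Each iteration linearizes the concave part -||x||_2 at the current iterate
    (its linearization is a global upper bound, by convexity of ||.||_2) and
    takes one proximal-gradient step on the remaining convex model.
    """
    n = A.shape[1]
    # Step size 1/L, where L = lam * ||A||_2^2 bounds the smooth part's curvature.
    step = 1.0 / (lam * np.linalg.norm(A, 2) ** 2)
    x = np.zeros(n)
    for _ in range(iters):
        nx = np.linalg.norm(x)
        s = x / nx if nx > 0 else np.zeros(n)   # subgradient of ||.||_2 at x
        grad = lam * A.T @ (A @ x - b) - s      # gradient of smooth + linearized part
        x = soft_threshold(x - step * grad, step)
    return x

def objective(A, b, x, lam=1.0):
    # The penalized l_{1-2} objective used by this sketch.
    return np.abs(x).sum() - np.linalg.norm(x) + 0.5 * lam * np.linalg.norm(A @ x - b) ** 2
```

Because the linearization majorizes the concave part and the $1/L$ proximal-gradient step decreases the convex surrogate, the objective is monotonically nonincreasing along the iterates, mirroring the monotone behavior that the line search enforces in SCP$_{ls}$.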
