On the convergence rate of the boosted Difference-of-Convex Algorithm (DCA) (2510.16569v1)
Abstract: The difference-of-convex algorithm (DCA) is a well-established nonlinear programming technique that solves a sequence of convex optimization subproblems. These subproblems are obtained from difference-of-convex (DC) decompositions of the objective and constraint functions. We investigate the worst-case performance of the unconstrained DCA, with and without boosting, where boosting simply performs an additional step in the direction generated by the usual DCA iteration. We show that, for certain classes of DC decompositions, the boosted DCA is provably better in the worst case than the usual DCA. While several numerical studies have reported that the boosted DCA outperforms the classical DCA, a theoretical explanation for this behavior had, to the best of our knowledge, not been given until now. Our proof technique relies on semidefinite programming (SDP) performance estimation.
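To make the two schemes concrete, here is a minimal sketch (not taken from the paper) of the unconstrained DCA and its boosted variant on a toy smooth DC function. The test function f = g - h, the matrix Q, and all step-size and tolerance parameters below are illustrative assumptions; only the algorithmic pattern, a convex subproblem solve per iteration plus, for the boosted version, an extra backtracking step along the DCA direction, reflects the methods discussed in the abstract.

```python
import numpy as np

# Illustrative DC decomposition (assumption, not from the paper):
#   f(x) = g(x) - h(x),
#   g(x) = 0.5 * x^T Q x          (strongly convex, Q positive definite)
#   h(x) = sum_i sqrt(x_i^2 + 1)  (smooth and convex)
Q = np.diag([2.0, 5.0])

def g(x): return 0.5 * x @ Q @ x
def h(x): return np.sum(np.sqrt(x**2 + 1.0))
def f(x): return g(x) - h(x)
def grad_h(x): return x / np.sqrt(x**2 + 1.0)

def dca_step(x):
    # Convex subproblem: minimize g(y) - <grad_h(x), y>.
    # For quadratic g this has the closed form y = Q^{-1} grad_h(x).
    return np.linalg.solve(Q, grad_h(x))

def dca(x0, iters=100):
    # Classical DCA: iterate the convex subproblem solution.
    x = x0.copy()
    for _ in range(iters):
        x = dca_step(x)
    return x

def bdca(x0, iters=100, lam0=2.0, alpha=0.1, beta=0.5):
    # Boosted DCA: after the usual DCA iterate y, take an additional
    # backtracking step along the direction d = y - x generated by DCA.
    x = x0.copy()
    for _ in range(iters):
        y = dca_step(x)
        d = y - x
        lam = lam0
        # Armijo-type backtracking on f along d, starting from y.
        while f(y + lam * d) > f(y) - alpha * lam**2 * (d @ d) and lam > 1e-12:
            lam *= beta
        x = y + lam * d
    return x

x0 = np.array([2.0, 2.0])
print("DCA :", dca(x0), " f =", f(dca(x0)))
print("BDCA:", bdca(x0), " f =", f(bdca(x0)))
```

Both routines converge to the same stationary point on this toy problem; the boosted variant typically does so in fewer iterations, which is the empirical behavior the paper explains in the worst case via SDP performance estimation.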