
Convergence of a first-order consensus-based global optimization algorithm (1910.08239v1)

Published 18 Oct 2019 in math.OC and math.DS

Abstract: Global optimization of a non-convex objective function often appears in large-scale machine-learning and artificial-intelligence applications. Recently, consensus-based optimization (CBO) methods have been introduced as a class of gradient-free optimization methods. In this paper, we provide a convergence analysis for the first-order CBO method in \cite{C-J-L-Z}. Prior to the current work, convergence studies were carried out for CBO methods on the corresponding mean-field limit, a Fokker-Planck equation, which does not imply convergence of the CBO method {\it per se}. Based on a consensus estimate made directly on the first-order CBO model, we provide a convergence analysis of the first-order CBO method of \cite{C-J-L-Z} without resorting to the corresponding mean-field model. Our convergence analysis consists of two steps. In the first step, we show that the CBO model reaches a global consensus asymptotically for any initial data; in the second step, we provide a sufficient condition on the system parameters (which is dimension-independent) and the initial data that guarantees the converged consensus state lies in a small neighborhood of the global minimum almost surely.
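To make the setting concrete, the following is a minimal sketch of a generic first-order CBO iteration, not the exact scheme or parameter choices analyzed in \cite{C-J-L-Z}: each particle drifts toward a Gibbs-weighted consensus point and is perturbed by noise scaled by its distance to that point. All names, parameter values, and the Euler-Maruyama discretization here are illustrative assumptions.

```python
import numpy as np

def cbo_step(x, f, lam=1.0, sigma=0.3, beta=30.0, dt=0.01, rng=None):
    """One Euler-Maruyama step of a generic first-order CBO sketch.

    x: (N, d) particle positions; f: objective mapping (N, d) -> (N,).
    lam: drift strength; sigma: noise strength; beta: Gibbs weight parameter.
    """
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)
    # Gibbs weights exp(-beta * f); shift by the minimum for numerical stability
    w = np.exp(-beta * (fx - fx.min()))
    xbar = (w[:, None] * x).sum(axis=0) / w.sum()  # weighted consensus point
    diff = x - xbar
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    noise = rng.standard_normal(x.shape)
    # drift toward the consensus point + diffusion proportional to the distance
    return x - lam * diff * dt + sigma * dist * np.sqrt(dt) * noise

def cbo(f, x0, steps=3000, **kw):
    """Run the CBO sketch for a fixed number of steps; returns final positions."""
    x = x0.copy()
    for _ in range(steps):
        x = cbo_step(x, f, **kw)
    return x
```

Because the diffusion shrinks with the distance to the consensus point, the ensemble contracts once the particles agree, which mirrors the two-step structure of the analysis: consensus formation first, then localization near the minimizer under suitable parameter conditions.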
