
Sharp Bounds on the Runtime of the (1+1) EA via Drift Analysis and Analytic Combinatorial Tools (1906.09047v2)

Published 21 Jun 2019 in cs.NE, cs.DS, and math.PR

Abstract: The expected running time of the classical (1+1) EA on the OneMax benchmark function has recently been determined by Hwang et al. (2018) up to additive errors of $O((\log n)/n)$. The same approach proposed there also leads to a full asymptotic expansion with errors of the form $O(n^{-K}\log n)$ for any $K>0$. This precise result is obtained by matched asymptotics with rigorous error analysis (or by solving asymptotically the underlying recurrences via inductive approximation arguments), ideas radically different from well-established techniques for the running time analysis of evolutionary computation such as drift analysis. This paper revisits drift analysis for the (1+1) EA on OneMax and obtains that the expected running time $E(T)$, starting from $\lceil n/2\rceil$ one-bits, is determined by the sum of inverse drifts up to logarithmic error terms, more precisely $$\sum_{k=1}^{\lfloor n/2\rfloor}\frac{1}{\Delta(k)} - c_1\log n \le E(T) \le \sum_{k=1}^{\lfloor n/2\rfloor}\frac{1}{\Delta(k)} - c_2\log n,$$ where $\Delta(k)$ is the drift (expected increase of the number of one-bits from the state of $n-k$ ones) and $c_1,c_2 >0$ are explicitly computed constants. This improves the previous asymptotic error known for the sum of inverse drifts from $\tilde{O}(n^{2/3})$ to a logarithmic error and gives for the first time a non-asymptotic error bound. Using standard asymptotic techniques, the difference between $E(T)$ and the sum of inverse drifts is found to be $(e/2)\log n+O(1)$.
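
As an illustration (not code from the paper), here is a minimal Python sketch of the quantity at the heart of the bound. Under standard bit mutation with rate $1/n$, the exact drift $\Delta(k)$ follows from two binomial distributions: $X \sim \mathrm{Bin}(k, 1/n)$ zero-bits flipped to one and $Y \sim \mathrm{Bin}(n-k, 1/n)$ one-bits flipped to zero, with the offspring accepted iff $X \ge Y$, so the expected gain is $E[(X-Y)^+]$. The function and variable names and the choice $n = 100$ are illustrative assumptions.

```python
import math

def drift(k, n):
    # Exact drift Delta(k): expected increase in the number of one-bits
    # in one step of the (1+1) EA on OneMax, starting from n - k ones.
    # Standard bit mutation: each bit flips independently with prob 1/n.
    # X ~ Bin(k, 1/n) zeros flipped to one, Y ~ Bin(n-k, 1/n) ones flipped
    # to zero; the offspring is accepted iff X >= Y, so the gain is (X-Y)^+.
    p = 1.0 / n
    pX = [math.comb(k, i) * p**i * (1 - p)**(k - i) for i in range(k + 1)]
    pY = [math.comb(n - k, j) * p**j * (1 - p)**(n - k - j)
          for j in range(n - k + 1)]
    return sum(pX[i] * pY[j] * (i - j)
               for i in range(1, k + 1)          # i = 0 contributes no gain
               for j in range(min(i, n - k) + 1))

n = 100  # illustrative problem size
inv_drift_sum = sum(1.0 / drift(k, n) for k in range(1, n // 2 + 1))
print(f"sum of inverse drifts (n={n}): {inv_drift_sum:.2f}")
print(f"(e/2) log n correction term  : {math.e / 2 * math.log(n):.2f}")
```

The second printed value is the $(e/2)\log n$ term from the abstract, i.e. the leading-order amount by which the sum of inverse drifts overshoots $E(T)$.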

Citations (11)
