Evolutionary Hessian Learning: Forced Optimal Covariance Adaptive Learning (FOCAL) (1112.4454v1)

Published 19 Dec 2011 in cs.NE, cs.NA, and quant-ph

Abstract: The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) has been the most successful Evolution Strategy at exploiting covariance information; it uses a form of Principal Component Analysis which, under certain conditions, is suggested to converge to the correct covariance matrix, formulated as the inverse of the mathematically well-defined Hessian matrix. However, in practice, there exist conditions where CMA-ES converges to the global optimum (accomplishing its primary goal) while failing to learn the true covariance matrix (missing an auxiliary objective), likely due to step-size deficiency. These circumstances can involve high-dimensional landscapes with large condition numbers. This paper introduces a novel technique entitled Forced Optimal Covariance Adaptive Learning (FOCAL), with the explicit goal of determining the Hessian at the global basin of attraction. It begins by establishing theoretical foundations for the inverse relationship between the learned covariance and the Hessian matrices. FOCAL is then introduced and demonstrated to retrieve the Hessian matrix with high fidelity on both model landscapes and experimental Quantum Control systems, which are observed to possess non-separable, non-quadratic search landscapes. The recovered Hessian forms are corroborated by physical knowledge of the systems. This study constitutes an example of Natural Computing successfully serving other branches of the natural sciences, while introducing a powerful generic method for any high-dimensional continuous search that seeks landscape information.
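
The covariance-Hessian relationship the abstract builds on can be probed directly on a model landscape. Below is a minimal sketch, not the paper's FOCAL procedure, that runs standard CMA-ES via the third-party pycma package on an ill-conditioned convex quadratic f(x) = 1/2 x^T H x and measures how closely the learned covariance aligns, up to a scale factor, with H^{-1}. The dimension, Hessian spectrum, and termination settings here are illustrative choices; per the paper, the alignment can be expected to degrade as dimension and condition number grow.

    # Sketch: does the covariance learned by CMA-ES align with the inverse
    # Hessian on a convex quadratic? Assumes the third-party `cma` package
    # (pycma); all problem parameters below are illustrative assumptions.
    import numpy as np
    import cma

    n = 6
    # Ill-conditioned diagonal Hessian (condition number 1e5), mimicking
    # the "large condition number" setting the abstract mentions.
    H = np.diag(10.0 ** np.arange(n))

    def f(x):
        x = np.asarray(x)
        return 0.5 * x @ H @ x

    es = cma.CMAEvolutionStrategy(n * [1.0], 0.5, {'verbose': -9})
    es.optimize(f)

    C = es.C  # learned covariance matrix after convergence
    # If C is proportional to H^{-1}, then C @ H is a multiple of the
    # identity; normalize the trace to n to remove the unknown scale.
    M = C @ H
    M *= n / np.trace(M)
    print("deviation of normalized C·H from identity:",
          np.linalg.norm(M - np.eye(n)))

A small reported deviation indicates that the evolved covariance has captured the Hessian structure up to scale; the paper's point is that on hard landscapes this deviation can stay large even when the optimizer reaches the global optimum, which is what FOCAL is designed to remedy.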

Citations (3)
