
LLaMEA: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics (2405.20132v3)

Published 30 May 2024 in cs.NE and cs.AI

Abstract: LLMs such as GPT-4 have demonstrated their ability to understand natural language and generate complex code snippets. This paper introduces a novel LLM Evolutionary Algorithm (LLaMEA) framework, leveraging GPT models for the automated generation and refinement of algorithms. Given a set of criteria and a task definition (the search space), LLaMEA iteratively generates, mutates, and selects algorithms based on performance metrics and feedback from runtime evaluations. This framework offers a unique approach to generating optimized algorithms without requiring extensive prior expertise. We show how this framework can be used to generate novel black-box metaheuristic optimization algorithms automatically. LLaMEA generates multiple algorithms that outperform state-of-the-art optimization algorithms (Covariance Matrix Adaptation Evolution Strategy and Differential Evolution) on the five-dimensional black-box optimization benchmark (BBOB). The algorithms also show competitive performance on the 10- and 20-dimensional instances of the test functions, although they have not seen such instances during the automated generation process. The results demonstrate the feasibility of the framework and identify future directions for automated generation and optimization of algorithms via LLMs.
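
The abstract describes an iterative generate-mutate-select loop in which an LLM produces candidate metaheuristics and benchmark feedback drives refinement. The sketch below illustrates that loop in a minimal (1+1)-style form; it is an assumption-based illustration, not the authors' implementation. The helpers `query_llm` and `evaluate`, the `TASK_PROMPT` wording, and the scoring are all placeholders introduced here for clarity.

```python
import random

# --- Placeholders (assumptions, not the paper's actual code) -------------

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. a GPT-4 completion).

    In the real framework this would return Python source code for a
    candidate metaheuristic; here it echoes a trivial stub so the loop
    is runnable end to end.
    """
    return "def optimize(f, dim, budget): ..."


def evaluate(algorithm_code: str) -> float:
    """Placeholder fitness: in the framework this would run the generated
    algorithm on a benchmark (e.g. the 5-D BBOB functions) and return an
    aggregate performance score. Here a random score stands in so the
    selection logic can be exercised.
    """
    return random.random()


# --- (1+1)-style generate/mutate/select loop, as a sketch ----------------

TASK_PROMPT = (
    "Write a novel black-box metaheuristic in Python as a function "
    "`optimize(f, dim, budget)` that minimizes f over [-5, 5]^dim."
)

def llamea_sketch(iterations: int = 20) -> tuple[str, float]:
    # Initial candidate generated from the task description alone.
    parent_code = query_llm(TASK_PROMPT)
    parent_score = evaluate(parent_code)

    for _ in range(iterations):
        # Mutation: ask the LLM to refine the parent, feeding back its
        # evaluated performance as runtime feedback.
        feedback_prompt = (
            f"{TASK_PROMPT}\n\nCurrent algorithm:\n{parent_code}\n"
            f"Its benchmark score was {parent_score:.4f}. "
            "Propose an improved variant."
        )
        child_code = query_llm(feedback_prompt)
        child_score = evaluate(child_code)

        # Selection: keep the child only if it performs at least as well.
        if child_score >= parent_score:
            parent_code, parent_score = child_code, child_score

    return parent_code, parent_score


if __name__ == "__main__":
    best_code, best_score = llamea_sketch()
    print(f"Best score after search: {best_score:.4f}")
```

The key design point the abstract emphasizes is that selection is driven purely by performance metrics and runtime feedback, so no hand-crafted algorithmic expertise enters the loop beyond the task definition in the prompt.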

Authors (2)
  1. Niki van Stein (31 papers)
  2. Thomas Bäck (121 papers)
Citations (2)