
Large Language Models as Evolutionary Optimizers (2310.19046v3)

Published 29 Oct 2023 in cs.NE

Abstract: Evolutionary algorithms (EAs) have achieved remarkable success in tackling complex combinatorial optimization problems. However, EAs often demand carefully designed operators, built with the aid of domain expertise, to achieve satisfactory performance. In this work, we present the first study on LLMs as evolutionary combinatorial optimizers. The main advantage is that the approach requires minimal domain knowledge and human effort, as well as no additional training of the model. We refer to this approach as LLM-driven EA (LMEA). Specifically, in each generation of the evolutionary search, LMEA instructs the LLM to select parent solutions from the current population and to perform crossover and mutation to generate offspring solutions. LMEA then evaluates these new solutions and includes them in the population for the next generation. LMEA is equipped with a self-adaptation mechanism that controls the temperature of the LLM, enabling it to balance exploration and exploitation and preventing the search from getting stuck in local optima. We investigate the power of LMEA on the classical traveling salesman problem (TSP), widely used in combinatorial optimization research. Notably, the results show that LMEA performs competitively with traditional heuristics in finding high-quality solutions on TSP instances with up to 20 nodes. Additionally, we study the effectiveness of LLM-driven crossover/mutation and the self-adaptation mechanism in evolutionary search. In summary, our results reveal the great potential of LLMs as evolutionary optimizers for solving combinatorial problems. We hope our research will inspire future explorations of LLM-driven EAs for complex optimization challenges.

LLMs as Evolutionary Optimizers

The paper "Large Language Models as Evolutionary Optimizers" introduces an approach that leverages LLMs to solve combinatorial optimization problems, exemplified by the Traveling Salesman Problem (TSP). The core idea is to adapt LLMs to act as evolutionary optimization operators, a role previously reliant on carefully designed heuristics and substantial domain expertise. The approach, called LLM-driven EA (LMEA), shows that LLMs can be incorporated into Evolutionary Algorithms (EAs) to automate complex optimization tasks while requiring minimal domain knowledge and no additional model training.

Methodology and Framework

LMEA operates by embedding an LLM within a standard EA loop. Traditional EAs simulate principles of biological evolution, iteratively improving solutions through selection, crossover, and mutation. LMEA adapts this paradigm by harnessing the natural-language capabilities of LLMs: the model is prompted to select parent solutions and to perform crossover and mutation, thereby generating offspring solutions.
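
The following minimal Python sketch illustrates one such generation. It is a sketch under stated assumptions: `query_llm` is a hypothetical stand-in for the LLM API call, and the prompt wording and reply parsing are illustrative rather than the paper's exact prompts.

```python
# Minimal sketch of one LMEA generation. `query_llm(prompt, temperature)` is a
# hypothetical helper standing in for the LLM API call; prompt wording and
# reply parsing are illustrative, not the paper's exact prompts.

def tour_length(tour, dist):
    """Total length of a TSP tour, returning to the starting city."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def lmea_generation(population, dist, query_llm, temperature):
    # Describe the problem and the current population (with fitness) in text.
    lines = [f"Tour: {t}  Length: {tour_length(t, dist):.1f}" for t in population]
    prompt = (
        "You are solving a traveling salesman problem.\n"
        + "\n".join(lines)
        + "\nSelect two parent tours, apply crossover and mutation, and output "
          "one new tour as a comma-separated list of city indices."
    )
    reply = query_llm(prompt, temperature=temperature)
    offspring = [int(x) for x in reply.strip().split(",")]  # parse the new tour

    # Evaluate the offspring and keep the best individuals for the next generation.
    candidates = population + [offspring]
    candidates.sort(key=lambda t: tour_length(t, dist))
    return candidates[: len(population)]
```

In practice the reply would need more robust parsing and validation (for example, checking that the returned tour is a valid permutation of the cities) before the offspring is admitted to the population.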

A significant feature of LMEA is its self-adaptation mechanism for the LLM's sampling temperature, which governs the balance between exploration and exploitation in the search. Adjusting this parameter helps the algorithm avoid becoming trapped in local optima.
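
A minimal sketch of this idea follows, assuming a simple stagnation rule; the patience window, step size, and temperature cap are illustrative values, not the paper's settings.

```python
# Illustrative self-adaptation of the LLM's sampling temperature: if the best
# tour length has not improved over the last `patience` generations, raise the
# temperature to encourage exploration. All thresholds are assumed values.

def adapt_temperature(temperature, best_lengths, patience=5, step=0.1, t_max=1.0):
    """Return an updated temperature based on the history of best tour lengths."""
    stagnated = (
        len(best_lengths) > patience
        and min(best_lengths[-patience:]) >= min(best_lengths[:-patience])
    )
    if stagnated:
        return min(temperature + step, t_max)  # stagnation: explore more
    return temperature  # still improving: keep exploiting
```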

Experimental Evaluation

The paper evaluates LMEA on classical TSP instances with varying node counts, comparing it to traditional heuristics and to a contemporary technique called Optimization by Prompting (OPRO). The results demonstrate LMEA's competitive performance; notably, it consistently finds optimal solutions on smaller instances (up to 15 nodes). This showcases LMEA's potential for addressing NP-hard combinatorial optimization problems, though scalability to larger instances remains a challenge.
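
For context on how such comparisons are typically scored on small instances, the sketch below computes a brute-force optimum and a solver's optimality gap; these helpers are illustrative and are not the paper's evaluation code.

```python
# Brute-force reference for small TSP instances (tractable up to ~10 cities):
# enumerate all tours starting at city 0 and report the relative gap between a
# solver's tour length and the exact optimum. Illustrative helpers only.

from itertools import permutations

def optimal_tour_length(dist):
    """Exact optimum by enumerating all tours that start at city 0."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        best = min(best, length)
    return best

def optimality_gap_percent(found_length, dist):
    """Relative gap (%) between a found tour and the exact optimum."""
    opt = optimal_tour_length(dist)
    return 100.0 * (found_length - opt) / opt
```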

The experimental section also scrutinizes the utility of LLM-driven genetic operators and finds that LMEA surpasses OPRO in performance, underscoring the benefits of integrating LLM-driven evolutionary strategies. Furthermore, the inclusion of self-adaptation mechanisms noticeably enhances LMEA's optimization performance, as evidenced by comparative analyses with a variant lacking this feature.

Implications and Future Directions

The paper's findings carry significant implications for AI-driven optimization. LLMs have shown promise not only in language tasks but also as tools that can bring broad knowledge to bear on optimization problems. While the approach presents considerable potential, scalability remains an area for improvement. Future research could explore strategies for handling larger instances more efficiently, such as fine-tuning smaller models, or could extend the approach to other combinatorial problems.

Moreover, more advanced prompt engineering could further refine LMEA's performance, particularly the quality and reliability of the solutions the LLM generates across diverse problem settings. This investigation into LLMs as evolutionary optimizers opens new pathways for AI in optimization, pointing toward a future in which less manual operator design and more general-purpose models reshape the optimization landscape.

Authors (5)
  1. Shengcai Liu (40 papers)
  2. Caishun Chen (7 papers)
  3. Xinghua Qu (17 papers)
  4. Ke Tang (107 papers)
  5. Yew-Soon Ong (105 papers)
Citations (53)