Autonomous Multi-Objective Optimization Using Large Language Model (2406.08987v2)

Published 13 Jun 2024 in cs.NE

Abstract: Multi-objective optimization problems (MOPs) are ubiquitous in real-world applications, presenting the complex challenge of balancing multiple conflicting objectives. Traditional evolutionary algorithms (EAs), though effective, often rely on domain-specific expertise and iterative fine-tuning, which hinders their adaptability to unseen MOPs. In recent years, the advent of LLMs has revolutionized software engineering by enabling the autonomous generation and refinement of programs. Leveraging this breakthrough, we propose a new LLM-based framework that autonomously designs EA operators for solving MOPs. The proposed framework includes a robust testing module that refines the generated EA operator through error-driven dialogue with LLMs, together with a dynamic selection strategy and informative prompting-based crossover and mutation tailored to the textual optimization pipeline. Our approach facilitates the design of EA operators without extensive expert intervention, thereby speeding up the innovation of EA operators. Empirical studies across various MOP categories validate the robustness and superior performance of our proposed framework.
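The error-driven dialogue described in the abstract can be read as a generate-test-repair loop: the LLM proposes an operator, the testing module executes it, and any traceback is fed back as the next prompt. The sketch below is a minimal illustration of that idea only, assuming a hypothetical `query_llm` client and an illustrative `crossover(parents)` interface; it is not the authors' implementation.

```python
# Illustrative sketch of an error-driven operator refinement loop (assumption,
# not the paper's code). `query_llm` is a placeholder for any LLM chat API.
import traceback


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call that returns Python source for an operator."""
    raise NotImplementedError("plug in an LLM client here")


def refine_operator(task_description: str, max_rounds: int = 5):
    """Request an EA crossover operator, smoke-test it, and feed errors back."""
    prompt = (
        "Write a Python function `crossover(parents)` returning offspring for: "
        + task_description
    )
    for _ in range(max_rounds):
        code = query_llm(prompt)
        namespace: dict = {}
        try:
            exec(code, namespace)                    # load the generated operator
            operator = namespace["crossover"]
            operator([[0.1, 0.2], [0.3, 0.4]])       # smoke test on dummy parents
            return operator                          # test passed: accept it
        except Exception:
            # Error-driven dialogue: return the traceback and ask for a fix.
            prompt = (
                "The previous operator failed with:\n"
                + traceback.format_exc()
                + "\nPlease return a corrected `crossover(parents)` function."
            )
    raise RuntimeError("operator could not be refined within the retry budget")
```

In this reading, the loop terminates either when the generated operator survives the test harness or when the retry budget is exhausted; the actual framework's test cases, prompts, and acceptance criteria are described in the paper itself.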

Authors (6)
  1. Yuxiao Huang (10 papers)
  2. Shenghao Wu (6 papers)
  3. Wenjie Zhang (138 papers)
  4. Jibin Wu (42 papers)
  5. Liang Feng (59 papers)
  6. Kay Chen Tan (83 papers)
Citations (2)