LLM4CMO: Large Language Model-aided Algorithm Design for Constrained Multiobjective Optimization (2508.11871v1)
Abstract: Constrained multiobjective optimization problems (CMOPs) frequently arise in real-world applications where multiple conflicting objectives must be optimized under complex constraints. Existing dual-population two-stage algorithms have shown promise by leveraging infeasible solutions to improve solution quality. However, designing high-performing constrained multiobjective evolutionary algorithms (CMOEAs) remains challenging due to the intricacy of their algorithmic components. Meanwhile, large language models (LLMs) offer new opportunities for assisting with algorithm design, but their effective integration into such tasks remains underexplored. To address this gap, we propose LLM4CMO, a novel CMOEA based on a dual-population, two-stage framework. In Stage 1, the algorithm identifies both the constrained Pareto front (CPF) and the unconstrained Pareto front (UPF). In Stage 2, it performs targeted optimization using a combination of hybrid operators (HOps), an epsilon-based constraint-handling method, and a classification-based UPF-CPF relationship strategy, along with a dynamic resource allocation (DRA) mechanism. To reduce design complexity, the core modules, including the HOps, the epsilon decay function, and the DRA mechanism, are decoupled and designed through prompt template engineering and LLM-human interaction. Experimental results on six benchmark test suites and ten real-world CMOPs demonstrate that LLM4CMO outperforms eleven state-of-the-art baseline algorithms. Ablation studies further validate the effectiveness of the LLM-aided modular design. These findings offer preliminary evidence that LLMs can serve as efficient co-designers in the development of complex evolutionary optimization algorithms. The code associated with this article is available at https://anonymous.4open.science/r/LLM4CMO971.
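To make the epsilon-based constraint-handling component of Stage 2 concrete, the sketch below shows a generic epsilon-level comparison with a polynomially decaying epsilon schedule. This is a minimal illustration under assumed, textbook-style forms; the actual decay function in LLM4CMO is LLM-designed and not reproduced here, and all names (constraint_violation, epsilon_schedule, epsilon_dominates, eps0, cp) are hypothetical.

```python
# Minimal sketch of epsilon-based constraint handling of the kind used in
# Stage 2 of dual-population CMOEAs. The decay schedule and comparison rule
# are generic assumed forms, NOT the LLM-designed modules from the paper.
import numpy as np


def constraint_violation(g_values: np.ndarray) -> float:
    """Aggregate inequality constraints g_i(x) <= 0 into one overall violation."""
    return float(np.sum(np.maximum(0.0, g_values)))


def epsilon_schedule(gen: int, max_gen: int, eps0: float, cp: float = 2.0) -> float:
    """Assumed polynomial decay: eps0 at generation 0, shrinking to 0 at max_gen."""
    return eps0 * (1.0 - gen / max_gen) ** cp


def epsilon_dominates(f_a, cv_a, f_b, cv_b, eps) -> bool:
    """Prefer a over b under the epsilon-level rule:
    if both violations are within eps (or equal), use Pareto dominance on the
    objectives; otherwise the solution with lower violation wins."""
    if (cv_a <= eps and cv_b <= eps) or np.isclose(cv_a, cv_b):
        return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))
    return cv_a < cv_b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two hypothetical solutions with 2 objectives and 3 inequality constraints.
    f1, g1 = rng.random(2), rng.normal(size=3)
    f2, g2 = rng.random(2), rng.normal(size=3)
    eps = epsilon_schedule(gen=50, max_gen=200, eps0=1.0)
    print(epsilon_dominates(f1, constraint_violation(g1),
                            f2, constraint_violation(g2), eps))
```

Early in the run a large epsilon lets mildly infeasible solutions compete on objective values (helping the population cross infeasible regions toward the CPF), while the decaying schedule gradually enforces strict feasibility.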