Can Large Language Models Be Trusted as Evolutionary Optimizers for Network-Structured Combinatorial Problems? (2501.15081v2)
Abstract: Large language models (LLMs) have shown impressive capabilities in language understanding and reasoning across diverse domains. Recently, there has been increasing interest in using LLMs not merely as assistants in optimization tasks but as active optimizers, particularly for network-structured combinatorial problems. However, before LLMs can be reliably deployed in this role, a fundamental question must be addressed: can LLMs iteratively manipulate solutions that consistently adhere to problem constraints? In this work, we propose a systematic framework to evaluate the capacity of LLMs to engage with problem structure. Rather than treating the model as a black-box generator, we adopt commonly used evolutionary operators as the optimizer and propose a comprehensive evaluation framework that rigorously assesses the output fidelity of LLM-generated operators across different stages of the evolutionary process. To enhance robustness, we introduce a hybrid error-correction mechanism that mitigates uncertainty in LLM outputs. Moreover, we develop a cost-efficient population-level optimization strategy that significantly improves efficiency over traditional individual-level approaches. Extensive experiments on a representative node-level combinatorial network optimization task demonstrate the effectiveness, adaptability, and inherent limitations of LLM-based operators. Our findings offer new perspectives on integrating LLMs into evolutionary computation and provide practical insights for scalable optimization in networked systems.
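To make the abstract's pipeline concrete, below is a minimal, hypothetical sketch of the loop it describes: an evolutionary search over a node-level network problem (minimum vertex cover is used here purely as a stand-in task), where a single population-level operator call proposes offspring for the whole population, and a repair step plays the role of the hybrid error-correction mechanism by restoring feasibility of LLM-proposed solutions. The `random_operator` stub, the greedy repair rule, and all parameter names are assumptions for illustration, not the paper's actual prompts or operators.

```python
# Hypothetical sketch (not the paper's implementation) of an LLM-as-operator
# evolutionary loop with population-level proposals and feasibility repair.
import random
from typing import Callable, List, Set, Tuple

Graph = List[Tuple[int, int]]   # edge list of an undirected graph
Individual = Set[int]           # a candidate solution: a set of selected node ids


def is_feasible(cover: Individual, edges: Graph) -> bool:
    """Vertex-cover constraint: every edge needs at least one selected endpoint."""
    return all(u in cover or v in cover for u, v in edges)


def repair(cover: Individual, edges: Graph) -> Individual:
    """Assumed error-correction rule: greedily add an endpoint of every
    uncovered edge so that a possibly-invalid LLM proposal becomes feasible."""
    fixed = set(cover)
    for u, v in edges:
        if u not in fixed and v not in fixed:
            fixed.add(u)
    return fixed


def fitness(cover: Individual) -> int:
    """Smaller covers are better, so fitness is the negated size."""
    return -len(cover)


def random_operator(population: List[Individual]) -> List[Individual]:
    """Placeholder for the LLM call. In the paper's setting this would be one
    prompt containing the whole population; here it just perturbs each parent."""
    return [{n for n in parent if random.random() > 0.2} for parent in population]


def evolve(edges: Graph,
           n_nodes: int,
           operator: Callable[[List[Individual]], List[Individual]],
           pop_size: int = 8,
           generations: int = 20) -> Individual:
    """Population-level loop: one operator call per generation for the whole
    population, instead of one call per individual."""
    population = [repair({i for i in range(n_nodes) if random.random() < 0.5}, edges)
                  for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [repair(child, edges) for child in operator(population)]
        # Elitist survivor selection over parents and repaired offspring.
        population = sorted(population + offspring, key=fitness, reverse=True)[:pop_size]
    return max(population, key=fitness)


if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
    best = evolve(edges, n_nodes=4, operator=random_operator)
    print("best cover:", best, "feasible:", is_feasible(best, edges))
```

Swapping `random_operator` for a function that serializes the population into a prompt and parses the model's reply would reproduce the cost structure the abstract contrasts: one LLM call per generation rather than per individual, with the repair step absorbing malformed or constraint-violating outputs.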