Automatic Adaptation Rule Optimization via Large Language Models (2407.02203v1)
Published 2 Jul 2024 in cs.CL and cs.AI
Abstract: Rule-based adaptation is a foundational approach to self-adaptation, characterized by its human readability and rapid response. However, building high-performance and robust adaptation rules is often challenging because it essentially involves searching for an optimal design in a complex variable space. In response, this paper attempts to employ LLMs as an optimizer to construct and optimize adaptation rules, leveraging the common sense and reasoning capabilities inherent in LLMs. Preliminary experiments conducted in SWIM have validated the effectiveness and limitations of our method.
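To make the "LLM as optimizer" idea concrete, the sketch below shows one plausible optimization loop: the LLM is repeatedly shown previously evaluated rule sets and their scores, asked to propose an improved rule set, and the best candidate is kept. This is a minimal illustration, not the paper's implementation; the function names (`call_llm`, `evaluate_rules`), the JSON rule encoding, and the utility function are all assumptions, and the stubs would need to be replaced with a real LLM client and a SWIM-backed evaluation.

```python
# Hypothetical sketch of LLM-driven adaptation-rule optimization.
# Rule fields, prompts, and scoring are illustrative assumptions only.

import json
import random


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; returns a candidate rule set as JSON.
    Replace with a real model client in practice."""
    # Stub: sample thresholds randomly so the sketch stays self-contained.
    return json.dumps({
        "add_server_if_utilization_above": round(random.uniform(0.6, 0.9), 2),
        "remove_server_if_utilization_below": round(random.uniform(0.2, 0.5), 2),
        "dimmer_step": round(random.uniform(0.05, 0.25), 2),
    })


def evaluate_rules(rules: dict) -> float:
    """Placeholder for running the rules in a simulator such as SWIM and
    returning a utility score (higher is better)."""
    # Stub objective: prefer a wide hysteresis band and a moderate dimmer step.
    band = (rules["add_server_if_utilization_above"]
            - rules["remove_server_if_utilization_below"])
    return band - abs(rules["dimmer_step"] - 0.1)


def optimize(iterations: int = 10) -> tuple[dict, float]:
    """LLM-as-optimizer loop: show recent (rules, score) pairs to the LLM,
    ask for an improved rule set, and keep the best candidate seen so far."""
    history: list[tuple[dict, float]] = []
    best_rules, best_score = {}, float("-inf")
    for _ in range(iterations):
        prompt = (
            "You optimize adaptation rules for a web application.\n"
            f"Previous candidates and scores: {history[-5:]}\n"
            "Propose improved thresholds as JSON."
        )
        rules = json.loads(call_llm(prompt))
        score = evaluate_rules(rules)
        history.append((rules, score))
        if score > best_score:
            best_rules, best_score = rules, score
    return best_rules, best_score


if __name__ == "__main__":
    rules, score = optimize()
    print("Best rules:", rules, "score:", round(score, 3))
```

In a real setup, the evaluation step would dominate the cost (each candidate rule set must be simulated or deployed), so the loop typically keeps the history short and prompts the LLM with only the most informative past candidates.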
- T. Zhao, W. Zhang, H. Zhao, and Z. Jin, “A reinforcement learning-based framework for the generation and evolution of adaptation rules,” in ICAC, 2017.
- N. Hollmann, S. Müller, and F. Hutter, “Large language models for automated data science: Introducing CAAFE for context-aware automated feature engineering,” in NeurIPS, 2023.
- C. Yang, X. Wang, Y. Lu, H. Liu, Q. V. Le, D. Zhou, and X. Chen, “Large language models as optimizers,” in ICLR, 2024.
- G. A. Moreno, B. Schmerl, and D. Garlan, “Swim: An exemplar for evaluation and comparison of self-adaptation approaches for web applications,” in SEAMS, 2018.
- J. Cai, J. Xu, J. Li, T. Yamauchi, H. Iba, and K. Tei, “Exploring the improvement of evolutionary computation via large language models,” in GECCO, 2024.