Automatic Adaptation Rule Optimization via Large Language Models (2407.02203v1)

Published 2 Jul 2024 in cs.CL and cs.AI

Abstract: Rule-based adaptation is a foundational approach to self-adaptation, characterized by its human readability and rapid response. However, building high-performance and robust adaptation rules is often challenging because it essentially involves searching for an optimal design in a complex variable space. In response, this paper attempts to employ LLMs as an optimizer to construct and optimize adaptation rules, leveraging the common sense and reasoning capabilities inherent in LLMs. Preliminary experiments conducted in SWIM have validated the effectiveness and limitations of our method.
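
The abstract describes the approach only at a high level. The sketch below illustrates what an LLM-as-optimizer loop of this kind could look like; it is not the authors' implementation. The threshold-based rule representation, the propose_rules stand-in (which a real system would replace with a prompted LLM call over scored past candidates, in the style of [3]), and the toy evaluate function (which would instead run the candidate rules in the SWIM exemplar [4] and return the achieved utility) are all assumptions made for illustration.

```python
import random

# Toy sketch of an LLM-as-optimizer loop for adaptation rules.
# All names and the rule representation are illustrative assumptions:
# in the paper's setting, the proposer would be an LLM prompted with
# scored past rule sets, and the evaluator would be the SWIM simulator.

def propose_rules(history):
    """Stand-in for the LLM proposer: perturb the best-known thresholds.

    A real implementation would serialize `history` (rule set, score)
    into a prompt and ask the LLM for an improved rule set.
    """
    if not history:
        return {"add_server_above": 0.7, "remove_server_below": 0.3}
    best, _ = max(history, key=lambda pair: pair[1])
    return {name: min(1.0, max(0.0, value + random.uniform(-0.1, 0.1)))
            for name, value in best.items()}

def evaluate(rules):
    """Stand-in fitness: reward a wide, well-ordered hysteresis band.

    A real implementation would execute the rules in SWIM and return
    the resulting utility (e.g., response time vs. server cost).
    """
    gap = rules["add_server_above"] - rules["remove_server_below"]
    return gap if gap > 0 else -1.0

def optimize_rules(iterations=50, top_k=5):
    history = []  # (rule_set, score) pairs kept as optimizer context
    for _ in range(iterations):
        candidate = propose_rules(history)
        history.append((candidate, evaluate(candidate)))
        # Keep only the best-scoring rule sets as context for the proposer.
        history.sort(key=lambda pair: pair[1], reverse=True)
        history = history[:top_k]
    return history[0]

if __name__ == "__main__":
    best_rules, score = optimize_rules()
    print(best_rules, score)
```

The loop mirrors OPRO-style optimization [3]: the proposer sees the best candidates found so far and suggests a refinement, and only evaluation in the target system determines which candidates survive.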

References (5)
  1. T. Zhao, W. Zhang, H. Zhao, and Z. Jin, “A reinforcement learning-based framework for the generation and evolution of adaptation rules,” in ICAC, 2017.
  2. N. Hollmann, S. Müller, and F. Hutter, “Large language models for automated data science: Introducing CAAFE for context-aware automated feature engineering,” in NeurIPS, 2023.
  3. C. Yang, X. Wang, Y. Lu, H. Liu, Q. V. Le, D. Zhou, and X. Chen, “Large language models as optimizers,” in ICLR, 2024.
  4. G. A. Moreno, B. Schmerl, and D. Garlan, “SWIM: An exemplar for evaluation and comparison of self-adaptation approaches for web applications,” in SEAMS, 2018.
  5. J. Cai, J. Xu, J. Li, T. Yamauchi, H. Iba, and K. Tei, “Exploring the improvement of evolutionary computation via large language models,” in GECCO, 2024.
