Reimagining Self-Adaptation in the Age of Large Language Models (2404.09866v1)
Abstract: Modern software systems are subject to various uncertainties arising from their operating context and environment. Self-adaptation techniques have been proposed as a way to cope with these uncertainties. Although recent advances in self-adaptation through the use of ML techniques have demonstrated promising results, their capabilities are limited by the constraints of the underlying ML techniques, such as the need for training samples and limited ability to generalize. Recent advancements in Generative AI (GenAI) open up new possibilities: trained on massive amounts of data, GenAI models can potentially interpret uncertainties and synthesize adaptation strategies. In this context, this paper presents a vision for using GenAI, particularly Large Language Models (LLMs), to enhance the effectiveness and efficiency of architectural adaptation. Drawing parallels with human operators, we propose that LLMs, through their advanced natural language processing capabilities, can autonomously generate comparable, context-sensitive adaptation strategies. This method allows software systems to understand their operational state and implement adaptations that align with their architectural requirements and environmental changes. By integrating LLMs into the self-adaptive system architecture, we facilitate nuanced decision-making that mirrors human-like adaptive reasoning. A case study with the SWIM exemplar system provides promising results, indicating that LLMs can potentially handle different adaptation scenarios. Our findings suggest that GenAI has significant potential to improve software systems' dynamic adaptability and resilience.
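To make the envisioned integration more concrete, the sketch below (not taken from the paper; a minimal illustration under stated assumptions) shows how an LLM could serve as the analyze/plan stages of a MAPE-K-style adaptation loop for a SWIM-like web system. The `read_system_state` and `apply_action` helpers are hypothetical placeholders for the system's monitoring and actuation interfaces, and the example assumes an OpenAI-compatible chat completion client; the model name is illustrative.

```python
# Minimal sketch: an LLM acting as the analyze/plan stages of a MAPE-K loop.
# Hypothetical helpers stand in for SWIM's monitoring/actuation interfaces;
# the chat call assumes an OpenAI-compatible client (openai>=1.0).
import json
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def read_system_state() -> dict:
    """Hypothetical monitor: in SWIM this would query arrival rate,
    response time, active servers, and dimmer value from the simulator."""
    return {"arrival_rate": 42.0, "avg_response_time_ms": 850,
            "active_servers": 2, "dimmer": 0.75}


def apply_action(action: dict) -> None:
    """Hypothetical executor: forwards the chosen adaptation to the system."""
    print(f"executing adaptation: {action}")


PROMPT = (
    "You manage a self-adaptive web system. Goal: keep average response time "
    "below 1000 ms while using as few servers as possible.\n"
    "Current state: {state}\n"
    'Reply with JSON only, e.g. {{"action": "add_server", "reason": "..."}}. '
    'Valid actions: "add_server", "remove_server", "set_dimmer" (include a '
    '"dimmer" value between 0 and 1), or "none".'
)


def plan_with_llm(state: dict) -> dict:
    """Ask the LLM to interpret the monitored state and synthesize an adaptation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": PROMPT.format(state=json.dumps(state))}],
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    for _ in range(3):  # a few adaptation cycles for illustration
        state = read_system_state()
        action = plan_with_llm(state)
        if action.get("action", "none") != "none":
            apply_action(action)
        time.sleep(5)  # the system's monitoring window would set the real cadence
```

In a fuller realization, the prompt would also carry the system's architectural model and adaptation goals (the knowledge part of MAPE-K), and the LLM's free-text rationale could be logged for operator review, echoing the paper's analogy with human operators.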
Authors: Raghav Donakanti, Prakhar Jain, Shubham Kulkarni, Karthik Vaidhyanathan