
Software Model Evolution with Large Language Models: Experiments on Simulated, Public, and Industrial Datasets (2406.17651v5)

Published 25 Jun 2024 in cs.SE and cs.AI

Abstract: Modeling structure and behavior of software systems plays a crucial role in the industrial practice of software engineering. As with other software engineering artifacts, software models are subject to evolution. Supporting modelers in evolving software models with recommendations for model completions is still an open problem, though. In this paper, we explore the potential of LLMs for this task. In particular, we propose an approach, RAMC, leveraging LLMs, model histories, and retrieval-augmented generation for model completion. Through experiments on three datasets, including an industrial application, one public open-source community dataset, and one controlled collection of simulated model repositories, we evaluate the potential of LLMs for model completion with RAMC. We found that LLMs are indeed a promising technology for supporting software model evolution (62.30% semantically correct completions on real-world industrial data and up to 86.19% type-correct completions). The general inference capabilities of LLMs are particularly useful when dealing with concepts for which there are few, noisy, or no examples at all.
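The abstract only names the ingredients of RAMC (LLMs, model histories, retrieval-augmented generation), not how they are wired together. As a rough illustration only, the Python sketch below shows one way such a retrieval-augmented completion loop can look: the toy model history, the lexical retriever, and the `call_llm` stub are placeholders introduced here for illustration and are not the paper's actual RAMC implementation.

```python
# Hypothetical sketch of retrieval-augmented model completion (not the RAMC code).
# Given a partial model fragment, retrieve similar past edits from a model
# history and assemble them into a few-shot prompt for an LLM.

from difflib import SequenceMatcher

# Toy "model history": pairs of (partial model fragment, completed fragment).
HISTORY = [
    ("class Sensor { attr id : Int }",
     "class Sensor { attr id : Int; ref readings : Measurement[*] }"),
    ("class Pump { attr rpm : Int }",
     "class Pump { attr rpm : Int; ref controller : PumpController }"),
]

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity; a real retriever would use richer features."""
    return SequenceMatcher(None, a, b).ratio()

def retrieve(query: str, k: int = 2):
    """Return the k history entries most similar to the partial model."""
    return sorted(HISTORY, key=lambda ex: similarity(query, ex[0]), reverse=True)[:k]

def build_prompt(partial_model: str) -> str:
    """Assemble retrieved examples and the query into a completion prompt."""
    examples = retrieve(partial_model)
    shots = "\n\n".join(f"Partial: {p}\nCompleted: {c}" for p, c in examples)
    return f"{shots}\n\nPartial: {partial_model}\nCompleted:"

def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g. an API client); plug one in here."""
    raise NotImplementedError("connect an LLM client to complete the prompt")

def complete_model(partial_model: str) -> str:
    return call_llm(build_prompt(partial_model))

if __name__ == "__main__":
    # Prints the assembled few-shot prompt for an unfinished model fragment.
    print(build_prompt("class Valve { attr open : Bool }"))
```

The key design point the sketch reflects is that retrieved history entries act as few-shot examples, so the LLM can complete model elements even when the project itself offers few or noisy precedents, which is the setting the paper highlights.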

Citations (1)
