
Technical Report on Neural Language Models and Few-Shot Learning for Systematic Requirements Processing in MDSE (2211.09084v1)

Published 16 Nov 2022 in cs.SE and cs.AI

Abstract: Systems engineering, in particular in the automotive domain, needs to cope with the massively increasing number of requirements that arise during the development process. To guarantee high product quality and to ensure that functional safety standards such as ISO 26262 are fulfilled, exploiting the potential of model-driven systems engineering in the form of automatic analyses, consistency checks, and tracing mechanisms is indispensable. However, the language in which requirements are written, and the tools needed to operate on them, are highly individual and require domain-specific tailoring. This hinders automated processing of requirements as well as the linking of requirements to models. Introducing formal requirement notations in existing projects leads, on the one hand, to the challenge of translating masses of requirements and changing established processes and, on the other hand, to the need for corresponding training of the requirements engineers. In this paper, based on the analysis of an open-source set of automotive requirements, we derive domain-specific language constructs that help us avoid ambiguities in requirements and increase the level of formality. The main contribution is the adoption and evaluation of few-shot learning with large pretrained language models (LLMs) for the automated translation of informal requirements into structured languages such as a requirement DSL. We show that support sets of fewer than ten translation examples can suffice to few-shot train an LLM to incorporate keywords and implement syntactic rules into informal natural language requirements.

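The few-shot approach summarized in the abstract can be illustrated with a short prompt-construction sketch: a small support set of informal-to-DSL translation pairs is concatenated into a prompt, and the model is asked to complete the translation for a new requirement. The snippet below is a minimal illustration only; the DSL keywords, signal names, and example requirements are hypothetical stand-ins and are not the support set or requirement DSL used in the paper.

```python
# Minimal sketch of few-shot prompting for requirement translation.
# All DSL keywords and example requirements below are hypothetical.

SUPPORT_SET = [
    # (informal requirement, structured DSL translation)
    ("The system shall turn on the headlights when it gets dark.",
     "WHEN ambient_light < threshold THEN SET headlights = ON"),
    ("If the driver presses the brake pedal, the brake lights must light up.",
     "WHEN brake_pedal == PRESSED THEN SET brake_lights = ON"),
    ("The speed warning is shown if the vehicle exceeds 130 km/h.",
     "WHEN vehicle_speed > 130 THEN SET speed_warning = VISIBLE"),
]

def build_prompt(new_requirement: str) -> str:
    """Concatenate the support set into a few-shot prompt and append the
    new informal requirement, leaving the DSL translation to the model."""
    lines = ["Translate informal automotive requirements into the requirement DSL.\n"]
    for informal, dsl in SUPPORT_SET:
        lines.append(f"Informal: {informal}")
        lines.append(f"DSL: {dsl}\n")
    lines.append(f"Informal: {new_requirement}")
    lines.append("DSL:")
    return "\n".join(lines)

if __name__ == "__main__":
    prompt = build_prompt(
        "The wipers shall start automatically when rain is detected."
    )
    # The resulting prompt would be sent to a large pretrained language model,
    # whose completion is taken as the structured requirement.
    print(prompt)
```
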
Authors (7)
  1. Vincent Bertram (1 paper)
  2. Miriam Boß (1 paper)
  3. Evgeny Kusmenko (2 papers)
  4. Imke Helene Nachmann (1 paper)
  5. Bernhard Rumpe (176 papers)
  6. Danilo Trotta (1 paper)
  7. Louis Wachtmeister (1 paper)
Citations (4)