
Towards a Common Testing Terminology for Software Engineering and Data Science Experts (2108.13837v3)

Published 31 Aug 2021 in cs.SE, cs.AI, and cs.LG

Abstract: Analytical quality assurance, especially testing, is an integral part of software-intensive system development. With the increased use of AI and Machine Learning (ML) in such systems, this becomes more difficult because well-understood software testing approaches cannot be applied directly to the AI-enabled parts of the system. The required adaptation of classical testing approaches and the development of new concepts for AI would benefit from a deeper understanding and exchange between AI and software engineering experts. We see the different terminologies used in the two communities as a major obstacle on this path. Since we consider a mutual understanding of testing terminology to be key, this paper contributes a mapping between the most important concepts from classical software testing and AI testing. In the mapping, we highlight differences in the relevance and naming of the mapped concepts.

Authors (5)
  1. Lisa Jöckel (8 papers)
  2. Thomas Bauer (30 papers)
  3. Michael Kläs (18 papers)
  4. Marc P. Hauer (5 papers)
  5. Janek Groß (6 papers)
Citations (6)
