Opening the Software Engineering Toolbox for the Assessment of Trustworthy AI (2007.07768v2)

Published 14 Jul 2020 in cs.SE, cs.AI, and cs.CY

Abstract: Trustworthiness is a central requirement for the acceptance and success of human-centered AI. To deem an AI system as trustworthy, it is crucial to assess its behaviour and characteristics against a gold standard of Trustworthy AI, consisting of guidelines, requirements, or only expectations. While AI systems are highly complex, their implementations are still based on software. The software engineering community has a long-established toolbox for the assessment of software systems, especially in the context of software testing. In this paper, we argue for the application of software engineering and testing practices for the assessment of trustworthy AI. We make the connection between the seven key requirements as defined by the European Commission's AI high-level expert group and established procedures from software engineering and raise questions for future work.

Authors (10)
  1. Mohit Kumar Ahuja (2 papers)
  2. Mohamed-Bachir Belaid (5 papers)
  3. Pierre Bernabé (2 papers)
  4. Mathieu Collet (1 paper)
  5. Arnaud Gotlieb (30 papers)
  6. Chhagan Lal (7 papers)
  7. Dusica Marijan (18 papers)
  8. Sagar Sen (7 papers)
  9. Aizaz Sharif (5 papers)
  10. Helge Spieker (27 papers)
Citations (7)