A Software Engineering Perspective on Testing Large Language Models: Research, Practice, Tools and Benchmarks (2406.08216v1)

Published 12 Jun 2024 in cs.SE

Abstract: LLMs are rapidly becoming ubiquitous, both as stand-alone tools and as components of current and future software systems. To enable the use of LLMs in the high-stakes or safety-critical systems of 2030, they need to undergo rigorous testing. Software Engineering (SE) research on testing Machine Learning (ML) components and ML-based systems has systematically explored many topics, such as test input generation and robustness. We believe knowledge about tools, benchmarks, research, and practitioner views related to LLM testing needs to be similarly organized. To this end, we present a taxonomy of LLM testing topics and conduct preliminary studies of state-of-the-art and state-of-practice approaches to research, open-source tools, and benchmarks for LLM testing, mapping the results onto this taxonomy. Our goal is to identify gaps requiring more research and engineering effort and to inspire clearer communication between LLM practitioners and the SE research community.
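To make the abstract's "test input generation and robustness" topics concrete, here is a minimal, hypothetical sketch of a metamorphic robustness test for an LLM. It is not taken from the paper or any specific tool: `query_llm` is a placeholder for whatever model call a project uses, and the typo-perturbation helper is an illustrative input generator.

```python
# Hypothetical sketch of a robustness-style LLM test.
# `query_llm` is a stand-in for any model-invocation function;
# it is not an API defined by the paper or a particular library.
import random


def perturb_with_typo(text: str, seed: int = 0) -> str:
    """Generate a follow-up test input by swapping two adjacent characters."""
    rng = random.Random(seed)
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    chars = list(text)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def test_label_is_robust_to_typos(query_llm):
    """Metamorphic test: a minor typo should not flip the model's answer."""
    prompt = "Classify the sentiment of this review as positive or negative: "
    review = "The battery life is excellent and the screen is gorgeous."
    original = query_llm(prompt + review)
    perturbed = query_llm(prompt + perturb_with_typo(review))
    # Metamorphic relation: the label is invariant under small input noise.
    assert original.strip().lower() == perturbed.strip().lower()
```

The appeal of this style of test, and one reason it recurs in the SE-for-ML literature the paper surveys, is that it needs no ground-truth label: it only checks a relation between two model outputs.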

Authors (4)
  1. Sinclair Hudson (2 papers)
  2. Sophia Jit (1 paper)
  3. Boyue Caroline Hu (5 papers)
  4. Marsha Chechik (19 papers)
Citations (3)