
TESTEVAL: Benchmarking Large Language Models for Test Case Generation (2406.04531v1)

Published 6 Jun 2024 in cs.SE

Abstract: Testing plays a crucial role in the software development cycle, enabling the detection of bugs, vulnerabilities, and other undesirable behaviors. To perform software testing, testers need to write code snippets that execute the program under test. Recently, researchers have recognized the potential of LLMs in software testing. However, there remains a lack of fair comparisons between different LLMs in terms of test case generation capabilities. In this paper, we propose TESTEVAL, a novel benchmark for test case generation with LLMs. We collect 210 Python programs from an online programming platform, LeetCode, and design three different tasks: overall coverage, targeted line/branch coverage, and targeted path coverage. We further evaluate sixteen popular LLMs, including both commercial and open-source ones, on TESTEVAL. We find that generating test cases to cover specific program lines/branches/paths is still challenging for current LLMs, indicating a lack of ability to comprehend program logic and execution paths. We have open-sourced our dataset and benchmark pipelines at https://LLM4softwaretesting.github.io to contribute to and accelerate future research on LLMs for software testing.

Authors (9)
  1. Wenhan Wang (22 papers)
  2. Chenyuan Yang (12 papers)
  3. Zhijie Wang (36 papers)
  4. Yuheng Huang (26 papers)
  5. Zhaoyang Chu (7 papers)
  6. Da Song (10 papers)
  7. Lingming Zhang (48 papers)
  8. An Ran Chen (9 papers)
  9. Lei Ma (195 papers)