When Automated Program Repair Meets Regression Testing -- An Extensive Study on 2 Million Patches (2105.07311v2)

Published 15 May 2021 in cs.SE

Abstract: In recent years, Automated Program Repair (APR) has been extensively studied in academia and has drawn wide attention from industry. However, APR techniques can be extremely time-consuming since (1) a large number of patches can be generated for a given bug, and (2) each patch needs to be executed on the original tests to ensure its correctness. In the literature, various techniques (e.g., based on learning, mining, and constraint solving) have been proposed/studied to reduce the number of patches. Intuitively, every patch can be treated as a software revision during regression testing; thus, traditional Regression Test Selection (RTS) techniques can be leveraged to execute only the tests affected by each patch (as the other tests would keep the same outcomes), further reducing patch execution time. However, few APR systems actually adopt RTS, and there is still a lack of systematic studies demonstrating the benefits of RTS and the impact of different RTS strategies on APR. To this end, this paper presents the first extensive study of widely used RTS techniques at different levels (i.e., class/method/statement levels) for 12 state-of-the-art APR systems on over 2M patches. Our study reveals various practical guidelines for bridging the gap between APR and regression testing.
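The core intuition above — treat each candidate patch as a revision and run only the tests whose dependencies overlap with the patched code — can be sketched as follows. This is an illustrative example, not the paper's implementation; the dependency map and names are hypothetical, standing in for coverage data a class-level RTS tool would collect from a prior full test run.

```python
def select_affected_tests(test_deps, patched_classes):
    """Class-level RTS: return the tests whose recorded class
    dependencies overlap the classes modified by a candidate patch.
    All other tests are guaranteed to keep their original outcomes,
    so they can be safely skipped during patch validation."""
    patched = set(patched_classes)
    return sorted(t for t, deps in test_deps.items() if patched & set(deps))

# Hypothetical test-to-class dependency map (e.g., gathered by
# instrumenting one full run of the original test suite).
test_deps = {
    "testAdd":    ["Calculator", "MathUtil"],
    "testParse":  ["Parser"],
    "testFormat": ["Formatter", "MathUtil"],
}

# A candidate patch that edits only MathUtil: 2 of 3 tests need to run.
affected = select_affected_tests(test_deps, ["MathUtil"])
print(affected)  # ['testAdd', 'testFormat']
```

Method- and statement-level RTS follow the same pattern with finer-grained dependency keys, trading higher analysis cost for smaller selected test sets.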

Authors (8)
  1. Yiling Lou (28 papers)
  2. Samuel Benton (2 papers)
  3. Dan Hao (14 papers)
  4. Lu Zhang (373 papers)
  5. Lingming Zhang (49 papers)
  6. Jun Yang (358 papers)
  7. Lin Tan (25 papers)
  8. Zhenpeng Chen (39 papers)
Citations (5)
