Fine-Grained Assertion-Based Test Selection (2403.16001v1)
Abstract: For large software applications, running the whole test suite after each code change is time- and resource-intensive. Regression test selection techniques aim to reduce test execution time by selecting only the tests that are affected by code changes. However, existing techniques select test entities at coarse granularity levels, such as the test class, which leads to imprecise selection and the execution of unaffected tests. We propose a novel approach that increases selection precision by analyzing test code at the statement level and treating individual test assertions as the unit of selection. We implement our fine-grained test selection approach in a tool called SELERTION and evaluate it against two state-of-the-art test selection techniques on 11 open-source subjects. Our results show that SELERTION increases selection precision for all subjects. Our selection reduces overall test time by 63% on average, making regression testing up to 23% faster than the other techniques. Our results also indicate that subjects with longer test execution times benefit more from our fine-grained selection technique.
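To illustrate the assertion-level granularity described above, the sketch below shows a hypothetical JUnit 5 test whose assertions depend on different production methods. A class- or method-level selector would rerun the whole test after any change to the class under test, whereas an assertion-level selector could rerun only the assertion whose dependencies reach the changed statement. The class and method names are illustrative assumptions, not the paper's subjects or tool output.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.Test;

// Hypothetical production class used only for illustration.
class Accounts {
    static double balance(double deposits, double withdrawals) {
        return deposits - withdrawals;
    }

    static boolean isOverdrawn(double deposits, double withdrawals) {
        return balance(deposits, withdrawals) < 0;
    }
}

class AccountsTest {
    @Test
    void balanceAndOverdraft() {
        double deposits = 100.0;
        double withdrawals = 40.0;

        // Assertion 1: exercises only balance(); an assertion-level selector
        // would re-select it when balance() changes.
        assertEquals(60.0, Accounts.balance(deposits, withdrawals), 1e-9);

        // Assertion 2: exercises isOverdrawn() (and, transitively, balance());
        // it could be skipped when only code unrelated to these methods changes,
        // whereas class-level selection would rerun the entire test.
        assertFalse(Accounts.isOverdrawn(deposits, withdrawals));
    }
}
```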
- Sijia Gu
- Ali Mesbah