
InterEvo-TR: Interactive Evolutionary Test Generation With Readability Assessment (2401.07072v1)

Published 13 Jan 2024 in cs.SE and cs.AI

Abstract: Automated test case generation has proven useful in reducing the typically high cost of software testing. However, several studies have also noted testers' skepticism about the comprehensibility of generated test suites compared to manually designed ones. This suggests that involving testers in the test generation process could increase their acceptance of automatically produced test suites. In this paper, we propose incorporating interactive readability assessments made by a tester into EvoSuite, a widely known evolutionary test generation tool. Our approach, InterEvo-TR, interacts with the tester at different moments during the search and shows different test cases covering the same coverage target for subjective evaluation. The design of such an interactive approach involves, among other aspects, a schedule of interaction, a method to diversify the selected targets, a plan to save and handle the readability values, and mechanisms to customize the level of engagement in the revision. To analyze the potential and practicability of our proposal, we conducted a controlled experiment in which 39 participants, including academics, professional developers, and student collaborators, interacted with InterEvo-TR. Our results show that the strategy for selecting and presenting intermediate results is effective for readability assessment. Furthermore, the participants' actions and responses to a questionnaire allowed us to analyze the aspects influencing test code readability, as well as the benefits and limitations of an interactive approach in the context of test case generation, paving the way for future developments based on interactivity.
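The abstract describes an interactive loop layered on top of an evolutionary search: the search periodically pauses, alternative test cases covering the same coverage target are shown to the tester, and the collected readability scores are stored and fed back into the search. The Java sketch below illustrates one plausible shape of that loop under stated assumptions; every class and method name here (InteractiveReadabilityLoop, askTester, selectDiverseTarget, the interaction interval, the fitness weighting) is hypothetical and does not correspond to EvoSuite's or InterEvo-TR's actual API.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Illustrative sketch of an interactive readability-assessment loop
 * in the spirit of InterEvo-TR. All names are hypothetical; they do
 * not reflect EvoSuite's real interfaces.
 */
public class InteractiveReadabilityLoop {

    /** A candidate test case produced by the evolutionary search. */
    record TestCase(String id, String target, String sourceCode) {}

    /** Stores the tester's readability ratings, keyed by test case id. */
    private final Map<String, Double> readabilityStore = new HashMap<>();

    /** Generations to run between interactions (assumed schedule parameter). */
    private final int interactionInterval;

    InteractiveReadabilityLoop(int interactionInterval) {
        this.interactionInterval = interactionInterval;
    }

    /** One full run: evolve, periodically pausing to consult the tester. */
    void run(int maxGenerations) {
        for (int gen = 1; gen <= maxGenerations; gen++) {
            evolveOneGeneration();
            if (gen % interactionInterval == 0) {
                // Pick a coverage target not revised recently, so the tester
                // sees diverse targets across interactions.
                String target = selectDiverseTarget();
                // Show several alternatives covering the SAME target,
                // as the abstract describes.
                for (TestCase tc : alternativesFor(target)) {
                    double score = askTester(tc); // subjective rating, e.g. in [0, 1]
                    readabilityStore.put(tc.id(), score);
                }
            }
        }
    }

    /** Bias selection by combining coverage fitness with stored readability. */
    double combinedFitness(TestCase tc, double coverageFitness) {
        // Fall back to a neutral score for test cases the tester has not rated.
        double readability = readabilityStore.getOrDefault(tc.id(), 0.5);
        return coverageFitness + readability; // weighting scheme is assumed
    }

    // --- Placeholders for the pieces the search engine would supply ---
    private void evolveOneGeneration() { /* one GA step of the search */ }
    private String selectDiverseTarget() { return "branch#42"; }
    private List<TestCase> alternativesFor(String target) { return List.of(); }
    private double askTester(TestCase tc) { return 0.5; }

    public static void main(String[] args) {
        new InteractiveReadabilityLoop(10).run(100);
    }
}
```

The key design point the sketch captures is that the tester is consulted only at scheduled intervals and only on a diversified subset of targets, which keeps the number of manual evaluations (and thus tester fatigue) bounded while still letting subjective readability influence the search.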

