Unit Testing Challenges with Automated Marking
Abstract: Teaching software testing is challenging because of its abstract, conceptual nature. The lack of tangible outcomes and the limited emphasis on hands-on experience compound the problem, often leaving students struggling to grasp the material, with engagement and motivation waning over time. In this paper, we introduce online unit testing challenges with automated marking, delivered through the EdStem platform, as a learning tool to strengthen students' software testing skills and their understanding of software testing concepts. We then conducted a survey to investigate the impact of these challenges on student learning. Results from 92 participants showed that the unit testing challenges kept students more engaged and motivated and fostered deeper understanding, while the automated marking mechanism accelerated learning by helping students recognise their mistakes and misconceptions more quickly than traditional human-written feedback. These results suggest to educators that online unit testing challenges with automated marking improve the overall student learning experience and are an effective pedagogical practice in software testing education.