Unit Testing Challenges with Automated Marking (2310.06308v1)

Published 10 Oct 2023 in cs.SE

Abstract: Teaching software testing is difficult because of its abstract and conceptual nature. The lack of tangible outcomes and the limited emphasis on hands-on experience compound the challenge, often hindering students' comprehension and eroding their engagement and motivation over time. In this paper, we introduce online unit testing challenges with automated marking, delivered via the EdStem platform, as a learning tool to strengthen students' software testing skills and their understanding of software testing concepts. We then conducted a survey to investigate the impact of these challenges on student learning. Results from 92 participants showed that the unit testing challenges kept students more engaged and motivated, fostering deeper understanding and learning, while the automated marking mechanism accelerated students' learning progress, helping them recognize their mistakes and misconceptions more quickly than traditional human-written feedback. These results suggest that online unit testing challenges with automated marking improve the overall student learning experience and are an effective pedagogical practice in software testing education.
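To make the idea concrete, a challenge of this kind typically gives students a small function and asks them to write unit tests that an autograder can run and mark automatically. The following is a minimal illustrative sketch, not taken from the paper: the `discount` function and its tests are hypothetical, using Python's standard `unittest` framework.

```python
import unittest

# Hypothetical function under test that students must exercise with unit tests.
def discount(price, rate):
    """Return price reduced by rate (0..1); reject rates outside that range."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return max(price * (1 - rate), 0)

class TestDiscount(unittest.TestCase):
    # Each test corresponds to one automatically marked criterion.
    def test_normal_case(self):
        self.assertAlmostEqual(discount(100, 0.2), 80.0)

    def test_zero_rate_leaves_price_unchanged(self):
        self.assertEqual(discount(50, 0), 50)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            discount(10, 1.5)
```

An autograding pipeline could run such a suite (e.g. with `python -m unittest`) and report per-test pass/fail results back to the student immediately, which is the kind of rapid feedback loop the paper credits with helping students find their misconceptions faster than manual marking.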

Authors (2)
  1. Chakkrit Tantithamthavorn
  2. Norman Chen