Learning from Teaching Assistants to Program with Subgoals: Exploring the Potential for AI Teaching Assistants (2309.10419v1)
Abstract: With recent advances in generative AI, conversational models like ChatGPT have become feasible candidates for teaching assistants (TAs). We investigate the practicality of using generative AI as TAs in introductory programming education by examining novice learners' interactions with TAs in a subgoal learning environment. To compare learners' interactions with and perceptions of AI and human TAs, we conducted a between-subjects study with 20 novice programming learners. Learners solved programming tasks by producing subgoals and subsolutions under the guidance of a TA. Our study shows that learners can solve tasks faster, with comparable scores, when working with AI TAs. Learners' perception of the AI TA is on par with that of human TAs in terms of the speed and comprehensiveness of replies, as well as the helpfulness, difficulty, and satisfaction of the conversation. Finally, based on our chat log analysis, we suggest guidelines for better designing and utilizing generative AI as TAs in programming education.