Automating Personalized Parsons Problems with Customized Contexts and Concepts (2404.10990v1)

Published 17 Apr 2024 in cs.CY

Abstract: Parsons problems provide useful scaffolding for introductory programming students learning to write code. However, generating large numbers of high-quality Parsons problems that appeal to the diverse range of interests in a typical introductory course is a significant challenge for educators. LLMs may offer a solution by allowing students to produce on-demand Parsons problems for topics spanning the breadth of the introductory programming curriculum, with thematic contexts that align with their personal interests. In this paper, we introduce PuzzleMakerPy, an educational tool that uses an LLM to generate unlimited contextualized drag-and-drop programming exercises in the form of Parsons problems, which introductory programmers can use as a supplemental learning resource. We evaluated PuzzleMakerPy by deploying it in a large introductory programming course, and found that students were highly engaged by the ability to personalize the contextual framing of problem descriptions, and reported that customizing the programming topics was useful for their learning.

Authors (3)
  1. Andre del Carpio Gutierrez (1 paper)
  2. Paul Denny (67 papers)
  3. Andrew Luxton-Reilly (16 papers)
Citations (2)