
Can We Trust AI-Generated Educational Content? Comparative Analysis of Human and AI-Generated Learning Resources (2306.10509v2)

Published 18 Jun 2023 in cs.HC and cs.AI

Abstract: As an increasing number of students move to online learning platforms that deliver personalized learning experiences, there is a great need for the production of high-quality educational content. LLMs appear to offer a promising solution to the rapid creation of learning materials at scale, reducing the burden on instructors. In this study, we investigated the potential for LLMs to produce learning resources in an introductory programming context, by comparing the quality of the resources generated by an LLM with those created by students as part of a learnersourcing activity. Using a blind evaluation, students rated the correctness and helpfulness of resources generated by AI and their peers, after both were initially provided with identical exemplars. Our results show that the quality of AI-generated resources, as perceived by students, is equivalent to the quality of resources generated by their peers. This suggests that AI-generated resources may serve as viable supplementary material in certain contexts. Resources generated by LLMs tend to closely mirror the given exemplars, whereas student-generated resources exhibit greater variety in terms of content length and specific syntax features used. The study highlights the need for further research exploring different types of learning resources and a broader range of subject areas, and understanding the long-term impact of AI-generated resources on learning outcomes.
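The study's central claim rests on comparing blind student ratings of AI-generated and peer-generated resources. As an illustrative sketch only (hypothetical 1-to-5 helpfulness ratings, not the paper's data, and not necessarily the authors' statistical method), one common way to test whether two such rating samples differ is a permutation test on the difference in means:

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference in mean ratings.

    Repeatedly shuffles the pooled ratings into two groups of the
    original sizes and counts how often the shuffled mean difference
    is at least as large as the observed one. Returns an approximate
    p-value; a large p-value means the two sources are statistically
    indistinguishable on this measure.
    """
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        if abs(mean(pooled[:len(a)]) - mean(pooled[len(a):])) >= observed:
            count += 1
    return count / n_iter

# Hypothetical ratings on a 1-5 scale (illustrative, not the study's data).
ai_ratings   = [4, 5, 3, 4, 4, 5, 3, 4]
peer_ratings = [4, 4, 3, 5, 4, 3, 4, 4]

p = permutation_test(ai_ratings, peer_ratings)
print(f"mean AI: {mean(ai_ratings):.2f}, "
      f"mean peer: {mean(peer_ratings):.2f}, p = {p:.3f}")
```

With closely matched rating distributions like these, the p-value comes out large, i.e., no detectable quality difference between the two sources, which mirrors the equivalence the abstract reports.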

Authors (5)
  1. Paul Denny (67 papers)
  2. Hassan Khosravi (12 papers)
  3. Arto Hellas (31 papers)
  4. Juho Leinonen (41 papers)
  5. Sami Sarsa (17 papers)
Citations (18)