
Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review (2303.13379v2)

Published 17 Mar 2023 in cs.CL, cs.AI, and cs.CY

Abstract: Educational technology innovations leveraging LLMs have shown the potential to automate the laborious process of generating and analysing textual content. While various innovations have been developed to automate a range of educational tasks (e.g., question generation, feedback provision, and essay grading), there are concerns regarding the practicality and ethicality of these innovations. Such concerns may hinder future research and the adoption of LLM-based innovations in authentic educational contexts. To address this, we conducted a systematic scoping review of 118 peer-reviewed papers published since 2017 to pinpoint the current state of research on using LLMs to automate and support educational tasks. The findings revealed 53 use cases for LLMs in automating education tasks, categorised into nine main categories: profiling/labelling, detection, grading, teaching support, prediction, knowledge representation, feedback, content generation, and recommendation. Additionally, we identified several practical and ethical challenges, including low technological readiness, lack of replicability and transparency, and insufficient privacy and beneficence considerations. The findings were summarised into three recommendations for future studies: updating existing innovations with state-of-the-art models (e.g., GPT-3/4), embracing the initiative of open-sourcing models/systems, and adopting a human-centred approach throughout the developmental process. As the intersection of AI and education continues to evolve, the findings of this study can serve as an essential reference point for researchers, allowing them to leverage the strengths, learn from the limitations, and uncover potential research opportunities enabled by ChatGPT and other generative AI models.

Authors (9)
  1. Lixiang Yan
  2. Lele Sha
  3. Linxuan Zhao
  4. Yuheng Li
  5. Roberto Martinez-Maldonado
  6. Guanliang Chen
  7. Xinyu Li
  8. Yueqiao Jin
  9. Dragan Gašević
Citations (153)