Scaling Evidence-based Instructional Design Expertise through Large Language Models (2306.01006v2)
Abstract: This paper explores the use of large language models (LLMs), specifically GPT-4, in instructional design. Focusing on scaling evidence-based instructional design expertise, our research aims to bridge the gap between theoretical educational research and practical implementation. We discuss the benefits and limitations of AI-driven content generation, emphasizing the necessity of human oversight to ensure the quality of educational materials. We illustrate this work through two detailed case studies in which we applied GPT-4 to create complex higher-order assessments and active learning components for different courses. From these experiences, we distill best practices for using LLMs effectively in instructional design tasks: utilizing templates, fine-tuning, handling unexpected output, implementing LLM chains, citing references, evaluating output, creating rubrics, grading, and generating distractors. We also share our vision of a future recommendation system in which a customized GPT-4 extracts instructional design principles from educational studies and creates personalized, evidence-supported strategies for users' unique educational contexts. Our research contributes to understanding and optimally harnessing the potential of AI-driven LLMs to enhance educational outcomes.
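Among the best practices listed above, an LLM chain decomposes a task into sequential prompts, with each step's output feeding the next. A minimal sketch of such a chain for assessment generation is shown below; the `call_llm` helper is hypothetical (a deterministic stand-in for a real model API such as GPT-4), and the prompts and topic are illustrative, not taken from the paper's case studies.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.

    Returns canned text so the chain's structure can be demonstrated
    deterministically; a real implementation would query GPT-4.
    """
    if "multiple-choice" in prompt:
        return ("Q: Which statement best compares quicksort and mergesort? "
                "Distractor: Mergesort is always faster in practice.")
    return "Analyze the trade-offs between quicksort and mergesort."


def make_objective(topic: str) -> str:
    # Step 1: draft a higher-order learning objective for the topic.
    return call_llm(f"Write one higher-order learning objective for: {topic}")


def make_assessment(objective: str) -> str:
    # Step 2: generate a question (with a distractor) targeting that objective.
    return call_llm(
        "Write a multiple-choice question with distractors "
        f"assessing this objective: {objective}"
    )


def chain(topic: str) -> str:
    # The defining feature of an LLM chain: step 1's output is step 2's input.
    objective = make_objective(topic)
    return make_assessment(objective)
```

Chaining keeps each prompt narrow and lets a human (or an automated check) inspect intermediate outputs, such as the drafted objective, before they propagate downstream.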