Using Large Language Models to Provide Explanatory Feedback to Human Tutors (2306.15498v1)
Abstract: Research demonstrates that learners who engage in producing explanations to support their reasoning can experience a positive impact on learning. However, providing learners with real-time explanatory feedback often presents challenges related to classification accuracy, particularly in domain-specific environments containing situationally complex and nuanced responses. We present two approaches for supplying tutors with real-time feedback within an online lesson on how to give students effective praise. This work-in-progress demonstrates considerable accuracy in binary classification supporting corrective feedback on effective, or effort-based (F1 score = 0.811), and ineffective, or outcome-based (F1 score = 0.350), praise responses. More notably, we introduce progress toward an enhanced approach to providing explanatory feedback using LLM-facilitated named entity recognition, which can not only give tutors feedback while they engage in lessons but can also potentially suggest real-time tutor moves. Future work involves leveraging LLMs for data augmentation to improve accuracy, while also developing an explanatory feedback interface.
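The abstract reports per-class F1 scores for the binary classifier. As a minimal sketch of how such per-class scores are computed, the following illustrates F1 as the harmonic mean of precision and recall; the labels and predictions are invented for illustration and are not the paper's data.

```python
# Hypothetical sketch: per-class F1 for binary classification of tutor
# praise responses ("effort" = effective, "outcome" = ineffective).
# The example labels below are illustrative only.

def f1_score(y_true, y_pred, positive_label):
    """F1 for one class: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred)
             if t == positive_label and p == positive_label)
    fp = sum(1 for t, p in zip(y_true, y_pred)
             if t != positive_label and p == positive_label)
    fn = sum(1 for t, p in zip(y_true, y_pred)
             if t == positive_label and p != positive_label)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return (2 * precision * recall / (precision + recall)
            if (precision + recall) else 0.0)

# Toy data: ground truth vs. classifier predictions.
y_true = ["effort", "effort", "outcome", "effort", "outcome"]
y_pred = ["effort", "outcome", "outcome", "effort", "effort"]

print(f1_score(y_true, y_pred, "effort"))   # F1 for the effective class
print(f1_score(y_true, y_pred, "outcome"))  # F1 for the ineffective class
```

Reporting F1 per class, as the paper does, exposes imbalanced performance (here 0.811 vs. 0.350) that a single accuracy figure would hide.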
- Jionghao Lin (36 papers)
- Danielle R. Thomas (11 papers)
- Feifei Han (2 papers)
- Shivang Gupta (9 papers)
- Wei Tan (55 papers)
- Ngoc Dang Nguyen (8 papers)
- Kenneth R. Koedinger (21 papers)