Using Large Language Models to Provide Explanatory Feedback to Human Tutors (2306.15498v1)

Published 27 Jun 2023 in cs.CL, cs.AI, and cs.HC

Abstract: Research demonstrates that learners who engage in the process of producing explanations to support their reasoning can experience a positive impact on learning. However, providing learners real-time explanatory feedback often presents challenges related to classification accuracy, particularly in domain-specific environments containing situationally complex and nuanced responses. We present two approaches for supplying tutors real-time feedback within an online lesson on how to give students effective praise. This work-in-progress demonstrates considerable accuracy in binary classification for corrective feedback on effective, or effort-based (F1 score = 0.811), and ineffective, or outcome-based (F1 score = 0.350), praise responses. More notably, we introduce progress toward an enhanced approach to providing explanatory feedback using LLM-facilitated named entity recognition, which can not only give tutors feedback while they engage in lessons but can also potentially suggest real-time tutor moves. Future work involves leveraging LLMs for data augmentation to improve accuracy, as well as developing an explanatory feedback interface.
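
To make the evaluation setup concrete, the sketch below shows how F1 scores for the effort-based and outcome-based praise classes could be computed with scikit-learn. This is a minimal illustration, not the authors' pipeline: the tutor responses, labels, and predictions are invented stand-ins for whatever classifier produces the actual predictions.

```python
# Hypothetical sketch of scoring binary praise classification
# (effort-based vs. outcome-based). All data below is invented for illustration;
# it does not reproduce the paper's dataset or models.
from sklearn.metrics import f1_score

# 1 = effective (effort-based) praise, 0 = ineffective (outcome-based) praise
responses = [
    "Great job sticking with that tough problem!",   # praises effort
    "You're so smart, you got it right!",            # praises outcome
    "I like how you tried a second strategy.",       # praises effort
    "Perfect score, you're a natural!",              # praises outcome
]
true_labels = [1, 0, 1, 0]
pred_labels = [1, 0, 1, 1]  # stand-in predictions from some classifier

# F1 for the effort-based (positive) class, mirroring the kind of metric reported
print("Effort-based F1:", f1_score(true_labels, pred_labels, pos_label=1))
# F1 for the outcome-based class
print("Outcome-based F1:", f1_score(true_labels, pred_labels, pos_label=0))
```

Reporting per-class F1 rather than overall accuracy matters here because the two praise classes are imbalanced, which is consistent with the large gap between the effort-based and outcome-based scores quoted in the abstract.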

Authors (7)
  1. Jionghao Lin (36 papers)
  2. Danielle R. Thomas (11 papers)
  3. Feifei Han (2 papers)
  4. Shivang Gupta (9 papers)
  5. Wei Tan (55 papers)
  6. Ngoc Dang Nguyen (8 papers)
  7. Kenneth R. Koedinger (21 papers)
Citations (9)