Evaluating the Knowledge Dependency of Questions (2211.11902v1)

Published 21 Nov 2022 in cs.CL

Abstract: The automatic generation of Multiple Choice Questions (MCQ) has the potential to significantly reduce the time educators spend on student assessment. However, existing evaluation metrics for MCQ generation, such as BLEU, ROUGE, and METEOR, focus on the n-gram based similarity of the generated MCQ to the gold sample in the dataset and disregard their educational value. They fail to evaluate the MCQ's ability to assess the student's knowledge of the corresponding target fact. To tackle this issue, we propose a novel automatic evaluation metric, coined Knowledge Dependent Answerability (KDA), which measures the MCQ's answerability given knowledge of the target fact. Specifically, we first show how to measure KDA based on student responses from a human survey. Then, we propose two automatic evaluation metrics, KDA_disc and KDA_cont, that approximate KDA by leveraging pre-trained LLMs to imitate students' problem-solving behavior. Through our human studies, we show that KDA_disc and KDA_cont have strong correlations with both (1) KDA and (2) usability in an actual classroom setting, labeled by experts. Furthermore, when combined with n-gram based similarity metrics, KDA_disc and KDA_cont are shown to have strong predictive power for various expert-labeled MCQ quality measures.
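The abstract's core idea — scoring an MCQ by the fraction of simulated students who answer it correctly when given the target fact — can be sketched as follows. This is only an illustrative sketch, not the paper's implementation: the `Student` callables stand in for the pre-trained language models the authors use, and the stub solvers below are hypothetical.

```python
from typing import Callable, List, Tuple

# Hypothetical types for illustration: an MCQ is (stem, options, answer_index);
# a "student" is any callable that picks an option index given the stem, the
# options, and the target fact. In the paper these students are pre-trained
# language models; here they are simple stand-in functions.
MCQ = Tuple[str, List[str], int]
Student = Callable[[str, List[str], str], int]

def kda_disc(mcq: MCQ, fact: str, students: List[Student]) -> float:
    """Fraction of simulated students answering correctly given the fact
    (a discrete approximation of Knowledge Dependent Answerability)."""
    stem, options, answer = mcq
    correct = sum(1 for s in students if s(stem, options, fact) == answer)
    return correct / len(students)

# Toy demo with two stub "students": one matches options against the fact,
# one always picks the first option.
def fact_matcher(stem: str, opts: List[str], fact: str) -> int:
    return next((i for i, o in enumerate(opts) if o in fact), 0)

def first_picker(stem: str, opts: List[str], fact: str) -> int:
    return 0

mcq = ("What is the capital of France?", ["Berlin", "Paris"], 1)
score = kda_disc(mcq, "Paris is the capital of France.", [fact_matcher, first_picker])
print(score)  # 0.5: one of the two stub students answers correctly
```

A continuous variant (analogous to KDA_cont) would replace the hard correct/incorrect count with the probability mass each model assigns to the correct option.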

Authors (9)
  1. Hyeongdon Moon (7 papers)
  2. Yoonseok Yang (4 papers)
  3. Jamin Shin (24 papers)
  4. Hangyeol Yu (10 papers)
  5. Seunghyun Lee (60 papers)
  6. Myeongho Jeong (7 papers)
  7. Juneyoung Park (11 papers)
  8. Minsam Kim (3 papers)
  9. Seungtaek Choi (14 papers)
Citations (8)