
Multiple-Choice Question Generation: Towards an Automated Assessment Framework (2209.11830v1)

Published 23 Sep 2022 in cs.CL and cs.AI

Abstract: Automated question generation is an important approach to enable personalisation of English comprehension assessment. Recently, transformer-based pretrained language models have demonstrated the ability to produce appropriate questions from a context paragraph. Typically, these systems are evaluated against a reference set of manually generated questions using n-gram based metrics, or by manual qualitative assessment. Here, we focus on a fully automated multiple-choice question generation (MCQG) system, where both the question and the possible answers must be generated from the context paragraph. Applying n-gram based approaches is challenging for this form of system, as the reference set is unlikely to capture the full range of possible questions and answer options. Conversely, manual assessment scales poorly and is expensive for MCQG system development. In this work, we propose a set of performance criteria that assess different aspects of interest in the generated multiple-choice questions. These qualities include grammatical correctness, answerability, diversity and complexity. Initial systems for each of these metrics are described and individually evaluated on standard multiple-choice reading comprehension corpora.
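As a rough illustration of the evaluation gap the abstract describes, the sketch below shows how a reference-based n-gram metric (BLEU, via NLTK) can score a perfectly acceptable generated question near zero when the reference set does not cover it, alongside a simple distinct-n diversity measure in the spirit of the proposed criteria. This is not the authors' implementation: the example strings and the `distinct_n` helper are illustrative assumptions.

```python
# A minimal sketch (not the paper's code) of why n-gram reference
# metrics are brittle for MCQG, plus a simple distinct-n diversity
# measure. Requires: pip install nltk. All strings are invented.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Reference set of manually written questions (tokenised).
references = [
    "what did the author conclude about the experiment ?".split(),
]

# A valid generated question that the reference set happens not to cover.
hypothesis = "which finding does the final paragraph support ?".split()

smooth = SmoothingFunction().method1
score = sentence_bleu(references, hypothesis, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")  # near zero despite the question being acceptable


def distinct_n(questions, n=2):
    """Fraction of unique n-grams across a set of generated questions."""
    ngrams = [
        tuple(toks[i : i + n])
        for q in questions
        for toks in [q.split()]
        for i in range(len(toks) - n + 1)
    ]
    return len(set(ngrams)) / max(len(ngrams), 1)


generated = [
    "what is the main idea of the passage ?",
    "what is the main idea of the text ?",
    "why does the narrator return home ?",
]
print(f"distinct-2: {distinct_n(generated):.3f}")  # lower = more repetitive
```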

Authors (2)
  1. Vatsal Raina (19 papers)
  2. Mark Gales (52 papers)
Citations (25)
