A BERT-based Distractor Generation Scheme with Multi-tasking and Negative Answer Training Strategies (2010.05384v1)

Published 12 Oct 2020 in cs.CL and cs.AI

Abstract: In this paper, we investigate two limitations of existing distractor generation (DG) methods. First, the quality of existing DG methods is still far from practical use; there remains considerable room for improvement. Second, existing DG designs mainly target single-distractor generation, whereas practical multiple-choice question (MCQ) preparation requires multiple distractors. Aiming at these goals, we present a new distractor generation scheme with multi-tasking and negative answer training strategies for effectively generating multiple distractors. The experimental results show that (1) our model advances the state-of-the-art result from 28.65 to 39.81 (BLEU-1 score) and (2) the generated distractors are diverse and show strong distracting power for multiple-choice questions.

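The abstract names "negative answer training" as one of the paper's strategies but does not spell out its mechanism. A plausible reading is that, alongside the usual likelihood objective on the reference distractor, the model is penalized for assigning probability to the correct answer, so generated distractors steer away from it. The sketch below is a toy illustration of that idea under this assumption; the function name, the toy distribution, and the `neg_weight` parameter are all hypothetical and not taken from the paper.

```python
import math

def distractor_loss(token_probs, distractor_ids, answer_ids, neg_weight=0.5):
    """Toy combined objective (hypothetical reading of 'negative answer
    training'): standard NLL on the reference distractor tokens, plus a
    term that rewards putting low probability on correct-answer tokens."""
    # Negative log-likelihood of the reference distractor tokens.
    nll = -sum(math.log(token_probs[t]) for t in distractor_ids)
    # Reward mass placed *away* from the correct-answer tokens.
    away = sum(math.log(1.0 - token_probs[t]) for t in answer_ids)
    return nll - neg_weight * away

# Toy 4-token vocabulary with a model's predicted next-token distribution.
probs = {0: 0.1, 1: 0.6, 2: 0.2, 3: 0.1}
loss = distractor_loss(probs, distractor_ids=[1], answer_ids=[3])
```

Lowering the probability on the answer token (id 3 here) reduces the loss, which is the intended pressure: distractors should be plausible yet distinct from the correct answer.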
Authors (3)
  1. Ho-Lam Chung (13 papers)
  2. Ying-Hong Chan (2 papers)
  3. Yao-Chung Fan (7 papers)
Citations (36)
