Clues Before Answers: Generation-Enhanced Multiple-Choice QA (2205.00274v1)

Published 30 Apr 2022 in cs.CL

Abstract: A trending paradigm for multiple-choice question answering (MCQA) is using a text-to-text framework. By unifying data in different tasks into a single text-to-text format, it trains a generative encoder-decoder model which is both powerful and universal. However, a side effect of twisting a generation target to fit the classification nature of MCQA is the under-utilization of the decoder and the knowledge that can be decoded. To exploit the generation capability and underlying knowledge of a pre-trained encoder-decoder model, in this paper, we propose a generation-enhanced MCQA model named GenMC. It generates a clue from the question and then leverages the clue to enhance a reader for MCQA. It outperforms text-to-text models on multiple MCQA datasets.
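
The abstract describes a two-stage pipeline: first generate a clue from the question with the decoder, then use that clue to help a reader score the answer options. The sketch below illustrates that idea only; it is not the authors' GenMC code. The prompt format, the use of an off-the-shelf T5 checkpoint, and the cosine-similarity reader are assumptions made here for illustration.

```python
# Illustrative two-stage "clue then answer" sketch (NOT the GenMC implementation).
# Assumptions: off-the-shelf t5-base, a simple "question: ..." prompt, and a
# mean-pooled cosine-similarity reader stand in for the paper's trained components.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def generate_clue(question: str) -> str:
    """Stage 1: use the decoder to generate a short clue for the question."""
    inputs = tokenizer("question: " + question, return_tensors="pt")
    clue_ids = model.generate(**inputs, max_new_tokens=16)
    return tokenizer.decode(clue_ids[0], skip_special_tokens=True)

def encode(text: str) -> torch.Tensor:
    """Mean-pooled encoder representation (illustrative reader encoder)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model.encoder(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

def answer(question: str, options: list[str]) -> str:
    """Stage 2: score each option against the question enriched with the clue."""
    clue = generate_clue(question)
    query = encode(question + " clue: " + clue)
    scores = [torch.cosine_similarity(query, encode(opt), dim=0) for opt in options]
    return options[int(torch.argmax(torch.stack(scores)))]

print(answer("What do plants absorb from sunlight to make food?",
             ["energy", "water", "soil", "oxygen"]))
```

The point of the sketch is the control flow, not the scoring function: the decoder is exercised to produce intermediate text (the clue) instead of being forced to emit an answer label directly, and the reader then conditions on both the question and that clue.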

Authors (6)
  1. Zixian Huang (6 papers)
  2. Ao Wu (5 papers)
  3. Jiaying Zhou (10 papers)
  4. Yu Gu (218 papers)
  5. Yue Zhao (394 papers)
  6. Gong Cheng (78 papers)
Citations (24)