Self-Teaching Machines to Read and Comprehend with Large-Scale Multi-Subject Question-Answering Data (2102.01226v2)

Published 1 Feb 2021 in cs.CL

Abstract: In spite of much recent research in the area, it is still unclear whether subject-area question-answering data is useful for machine reading comprehension (MRC) tasks. In this paper, we investigate this question. We collect a large-scale multi-subject multiple-choice question-answering dataset, ExamQA, and use incomplete and noisy snippets returned by a web search engine as the relevant context for each question-answering instance to convert it into a weakly-labeled MRC instance. We then propose a self-teaching paradigm to better use the generated weakly-labeled MRC instances to improve a target MRC task. Experimental results show that we can obtain +5.1% in accuracy on a multiple-choice MRC dataset, C3, and +3.8% in exact match on an extractive MRC dataset, CMRC 2018, over state-of-the-art MRC baselines, demonstrating the effectiveness of our framework and the usefulness of large-scale subject-area question-answering data for different types of machine reading comprehension tasks.
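
The abstract describes the pipeline only at a high level. As a rough illustration of the two ideas it names, the sketch below shows (1) how a subject-area QA pair plus noisy web-search snippets could be packaged into a weakly-labeled multiple-choice MRC instance, and (2) a soft-label training loss in the spirit of a self-teaching setup, where a teacher model's predictions on the weak instances supervise a student alongside the weak hard labels. This is not the paper's implementation; all names (`to_weak_mrc_instance`, `self_teaching_loss`) and hyperparameter values (`alpha`, `temperature`, `max_snippets`) are hypothetical.

```python
# Hypothetical sketch, not the paper's released code.
import torch
import torch.nn.functional as F


def to_weak_mrc_instance(question, options, answer_idx, snippets, max_snippets=5):
    """Build a weakly-labeled multiple-choice MRC instance.

    Retrieved snippets are concatenated into a pseudo-passage; the original
    QA answer is kept as a (possibly noisy) label.
    """
    passage = " ".join(snippets[:max_snippets])
    return {
        "passage": passage,    # incomplete / noisy context from web search
        "question": question,
        "options": options,
        "label": answer_idx,   # weak label inherited from the QA pair
    }


def self_teaching_loss(student_logits, teacher_logits, weak_labels,
                       alpha=0.5, temperature=2.0):
    """Combine soft teacher supervision with the noisy hard weak labels.

    alpha and temperature are illustrative, not values from the paper.
    """
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    distill = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    hard = F.cross_entropy(student_logits, weak_labels)
    return alpha * distill + (1.0 - alpha) * hard


# Toy usage: 4 answer options, batch of 2 weak instances.
student_logits = torch.randn(2, 4, requires_grad=True)
teacher_logits = torch.randn(2, 4)
weak_labels = torch.tensor([1, 3])
loss = self_teaching_loss(student_logits, teacher_logits, weak_labels)
loss.backward()
```

In a setup like this, the teacher would typically be a model already trained on the target MRC task, so its soft predictions help downweight instances where the weak label conflicts with the noisy retrieved context; how the paper itself schedules or filters the weak instances may differ.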

Authors (4)
  1. Dian Yu (78 papers)
  2. Kai Sun (317 papers)
  3. Dong Yu (329 papers)
  4. Claire Cardie (74 papers)
Citations (5)