REPT: Bridging Language Models and Machine Reading Comprehension via Retrieval-Based Pre-training (2105.04201v2)

Published 10 May 2021 in cs.CL

Abstract: Pre-trained Language Models (PLMs) have achieved great success on Machine Reading Comprehension (MRC) over the past few years. Although the general language representation learned from large-scale corpora does benefit MRC, the poor support for evidence extraction, which requires reasoning across multiple sentences, hinders PLMs from further advancing MRC. To bridge the gap between general PLMs and MRC, we present REPT, a REtrieval-based Pre-Training approach. In particular, we introduce two self-supervised tasks to strengthen evidence extraction during pre-training, which is further inherited by downstream MRC tasks through the consistent retrieval operation and model architecture. To evaluate our proposed method, we conduct extensive experiments on five MRC datasets that require collecting evidence from and reasoning across multiple sentences. Experimental results demonstrate the effectiveness of our pre-training approach. Moreover, further analysis shows that our approach is able to enhance the capacity of evidence extraction without explicit supervision.
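The abstract describes the retrieval operation only at a high level. The sketch below is an illustrative, assumption-laden rendering of what sentence-level evidence retrieval over PLM outputs typically looks like (mean-pooled sentence vectors scored against a query representation); it is not the paper's actual architecture or task formulation, and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def sentence_retrieval(token_states, sent_spans, query_state):
    """Illustrative sentence-level retrieval over encoder outputs.

    token_states: (seq_len, hidden) hidden states from a pre-trained encoder
    sent_spans:   list of (start, end) token-index pairs, one per sentence
    query_state:  (hidden,) representation of the question / query
    Returns attention weights over sentences and a retrieved evidence vector.
    """
    # Mean-pool each sentence span into a single sentence representation.
    sent_reprs = torch.stack(
        [token_states[s:e].mean(dim=0) for s, e in sent_spans]
    )  # (num_sents, hidden)

    # Score each sentence against the query and normalize into weights.
    scores = sent_reprs @ query_state          # (num_sents,)
    weights = F.softmax(scores, dim=-1)

    # The retrieved evidence is the attention-weighted sum of sentence vectors.
    evidence = weights @ sent_reprs            # (hidden,)
    return weights, evidence
```

In this reading, keeping the same retrieval operation during pre-training and fine-tuning is what lets the evidence-extraction ability learned from the self-supervised tasks transfer to downstream MRC without explicit evidence supervision.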

Authors (6)
  1. Fangkai Jiao (19 papers)
  2. Yangyang Guo (45 papers)
  3. Yilin Niu (10 papers)
  4. Feng Ji (75 papers)
  5. Feng-Lin Li (16 papers)
  6. Liqiang Nie (191 papers)
Citations (12)
