
RoR: Read-over-Read for Long Document Machine Reading Comprehension (2109.04780v2)

Published 10 Sep 2021 in cs.CL

Abstract: Transformer-based pre-trained models such as BERT have achieved remarkable results on machine reading comprehension. However, due to the constraint on encoding length (e.g., 512 WordPiece tokens), a long document is usually split into multiple chunks that are read independently. This limits the reading field to individual chunks, with no information collaboration across chunks for long-document machine reading comprehension. To address this problem, we propose RoR, a read-over-read method that expands the reading field from chunk to document. Specifically, RoR includes a chunk reader and a document reader. The former first predicts a set of regional answers for each chunk; these are then compacted into a highly condensed version of the original document that is guaranteed to fit in a single encoding pass. The latter then predicts global answers from this condensed document. Finally, a voting strategy aggregates and reranks the regional and global answers to produce the final prediction. Extensive experiments on two benchmarks, QuAC and TriviaQA, demonstrate the effectiveness of RoR for long-document reading. Notably, RoR ranked 1st on the QuAC leaderboard (https://quac.ai/) at the time of submission (May 17th, 2021).
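The two-stage pipeline in the abstract (chunk reader → condensed document → document reader → voting) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real readers are BERT-style span extractors, whereas here `toy_reader` is a stand-in keyword scorer, and names like `split_into_chunks` and the tiny `MAX_LEN` are assumptions for demonstration.

```python
from collections import defaultdict

MAX_LEN = 8  # stand-in for the 512-token encoder limit


def split_into_chunks(tokens, max_len=MAX_LEN):
    """Split a long document into fixed-size chunks, as in phase one of RoR."""
    return [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]


def toy_reader(chunk, question_words, top_k=2):
    """Stand-in reader: score single-token spans by overlap with the question.

    The paper uses a BERT-based span extractor; this placeholder just returns
    the top_k (answer, score) pairs, i.e. the "regional answers" of a chunk.
    """
    scored = [(tok, 1.0 if tok in question_words else 0.1) for tok in chunk]
    scored.sort(key=lambda pair: -pair[1])
    return scored[:top_k]


def ror(document_tokens, question_words):
    # Phase 1: the chunk reader predicts regional answers for each chunk.
    chunks = split_into_chunks(document_tokens)
    regional = [ans for ch in chunks for ans in toy_reader(ch, question_words)]

    # Phase 2: compact the regional answers into a condensed document that is
    # short enough to be encoded once, then read it globally.
    condensed = [answer for answer, _ in regional][:MAX_LEN]
    global_answers = toy_reader(condensed, question_words)

    # Phase 3: voting — aggregate scores of identical answer strings from both
    # readers and rerank to pick the final prediction.
    votes = defaultdict(float)
    for answer, score in regional + global_answers:
        votes[answer] += score
    return max(votes, key=votes.get)


doc = "the quick brown fox jumps over the lazy dog near the river bank".split()
print(ror(doc, {"fox"}))  # → fox
```

The key property the sketch preserves is that answers spotted in individual chunks get a second, document-level read, and the voting step lets agreement between the two readers boost an answer's final score.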

Authors (7)
  1. Jing Zhao (86 papers)
  2. Junwei Bao (34 papers)
  3. Yifan Wang (321 papers)
  4. Yongwei Zhou (8 papers)
  5. Youzheng Wu (32 papers)
  6. Xiaodong He (162 papers)
  7. Bowen Zhou (141 papers)
Citations (24)
