Reference Knowledgeable Network for Machine Reading Comprehension (2012.03709v3)

Published 7 Dec 2020 in cs.CL and cs.AI

Abstract: Multi-choice Machine Reading Comprehension (MRC) is a challenging task that requires models to select the most appropriate answer from a set of candidates given a passage and a question. Most existing research focuses on modeling specific tasks or complex networks without explicitly referring to relevant and credible external knowledge sources, which could greatly compensate for the deficiencies of the given passage. We therefore propose a novel reference-based knowledge enhancement model called Reference Knowledgeable Network (RekNet), which simulates human reading strategies to refine critical information from the passage and quote explicit knowledge when necessary. Specifically, RekNet refines fine-grained critical information, defined as the Reference Span, and then quotes explicit knowledge quadruples based on the co-occurrence information of the Reference Span and the candidates. RekNet is evaluated on three multi-choice MRC benchmarks: RACE, DREAM and Cosmos QA, obtaining consistent and remarkable performance improvements at an observable statistical significance level over strong baselines. Our code is available at https://github.com/Yilin1111/RekNet.
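The abstract describes quoting knowledge quadruples based on co-occurrence between the Reference Span and the answer candidates. The sketch below is a minimal, hypothetical illustration of that co-occurrence filtering idea, not the authors' implementation; the `Quadruple` type, field names, and token-overlap heuristic are all assumptions made for illustration (the official code is at the repository linked above).

```python
from typing import List, NamedTuple


class Quadruple(NamedTuple):
    """A hypothetical knowledge quadruple: (head, relation, tail, weight)."""
    head: str
    relation: str
    tail: str
    weight: float  # e.g., a confidence score from the knowledge source


def select_quadruples(reference_span: str,
                      candidates: List[str],
                      knowledge_base: List[Quadruple]) -> List[Quadruple]:
    """Keep quadruples whose head appears in the Reference Span and whose
    tail appears in some candidate, or vice versa (illustrative heuristic)."""
    span_tokens = set(reference_span.lower().split())
    cand_tokens = set(" ".join(candidates).lower().split())

    selected = []
    for quad in knowledge_base:
        head_tokens = quad.head.lower().split()
        tail_tokens = quad.tail.lower().split()
        head_in_span = any(tok in span_tokens for tok in head_tokens)
        tail_in_cand = any(tok in cand_tokens for tok in tail_tokens)
        tail_in_span = any(tok in span_tokens for tok in tail_tokens)
        head_in_cand = any(tok in cand_tokens for tok in head_tokens)
        if (head_in_span and tail_in_cand) or (tail_in_span and head_in_cand):
            selected.append(quad)

    # Prefer higher-weighted quadruples when quoting external knowledge.
    return sorted(selected, key=lambda q: q.weight, reverse=True)
```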

Authors (3)
  1. Yilin Zhao (17 papers)
  2. Zhuosheng Zhang (125 papers)
  3. Hai Zhao (227 papers)
Citations (5)