
KG-FiD: Infusing Knowledge Graph in Fusion-in-Decoder for Open-Domain Question Answering (2110.04330v2)

Published 8 Oct 2021 in cs.CL and cs.LG

Abstract: The current Open-Domain Question Answering (ODQA) paradigm typically consists of a retrieving module and a reading module. Given an input question, the reading module predicts the answer from the relevant passages retrieved by the retriever. The recently proposed Fusion-in-Decoder (FiD), built on top of the pretrained generative model T5, achieves state-of-the-art performance in the reading module. Although effective, it remains constrained by inefficient attention over all retrieved passages, many of which are noisy. In this work, we propose a novel method, KG-FiD, which filters noisy passages by leveraging the structural relationships among the retrieved passages in a knowledge graph. We initialize the passage node embeddings from the FiD encoder and then use a graph neural network (GNN) to update the representations for reranking. To improve efficiency, we build the GNN on top of an intermediate layer of the FiD encoder and pass only a few top reranked passages into the higher layers of the encoder and into the decoder for answer generation. We also apply the proposed GNN-based reranking method to enhance the passage retrieval results in the retrieving module. Extensive experiments on common ODQA benchmark datasets (Natural Questions and TriviaQA) demonstrate that KG-FiD improves vanilla FiD by up to 1.5% on answer exact match score and achieves performance comparable to FiD with only 40% of the computation cost.
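The reranking step described in the abstract can be sketched as follows. This is a minimal, illustrative version, not the paper's exact architecture: it assumes passage embeddings taken from an intermediate FiD encoder layer, a passage-passage adjacency matrix derived from knowledge-graph links, a single mean-aggregation GNN layer, and a linear scoring head (all names and shapes here are hypothetical).

```python
import numpy as np

def gnn_rerank(passage_embs, adj, w, k):
    """Sketch of KG-based passage reranking with a one-layer GNN.

    passage_embs: (n, d) passage embeddings from an intermediate
                  FiD encoder layer (assumed precomputed)
    adj:          (n, n) 0/1 adjacency among passages, derived from
                  knowledge-graph links; self-loops included
    w:            (d,) weights of a linear reranking head (illustrative)
    k:            number of top passages passed to the higher encoder
                  layers and the decoder
    """
    deg = adj.sum(axis=1, keepdims=True)           # node degrees
    h = adj @ passage_embs / np.maximum(deg, 1.0)  # mean-aggregate neighbors
    scores = h @ w                                 # score each passage
    top_k = np.argsort(-scores)[:k]                # indices of top-k passages
    return top_k, scores

# toy example: 4 passages, 3-dim embeddings, a small KG-derived graph
rng = np.random.default_rng(0)
embs = rng.normal(size=(4, 3))
adj = np.eye(4) + np.array([[0, 1, 0, 0],
                            [1, 0, 1, 0],
                            [0, 1, 0, 0],
                            [0, 0, 0, 0]])
top_k, scores = gnn_rerank(embs, adj, w=np.ones(3), k=2)
```

Only the `k` surviving passages continue through the remaining encoder layers and the decoder, which is where the reported ~60% computation saving comes from.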

Authors (9)
  1. Donghan Yu (18 papers)
  2. Chenguang Zhu (100 papers)
  3. Yuwei Fang (31 papers)
  4. Wenhao Yu (139 papers)
  5. Shuohang Wang (69 papers)
  6. Yichong Xu (42 papers)
  7. Xiang Ren (194 papers)
  8. Yiming Yang (151 papers)
  9. Michael Zeng (76 papers)
Citations (84)