
Subgraph Retrieval Enhanced Model for Multi-hop Knowledge Base Question Answering (2202.13296v2)

Published 27 Feb 2022 in cs.CL

Abstract: Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning. A well-chosen subgraph is crucial, as a small one may exclude the answer while a large one may introduce more noise. However, existing retrieval is either heuristic or interwoven with the reasoning, forcing reasoning over partial subgraphs, which increases reasoning bias when intermediate supervision is missing. This paper proposes a trainable subgraph retriever (SR) decoupled from the subsequent reasoning process, which enables a plug-and-play framework to enhance any subgraph-oriented KBQA model. Extensive experiments demonstrate that SR achieves significantly better retrieval and QA performance than existing retrieval methods. Via weakly supervised pre-training and end-to-end fine-tuning, SR achieves new state-of-the-art performance among embedding-based KBQA methods when combined with NSM, a subgraph-oriented reasoner.
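The decoupling the abstract describes can be pictured as two independent stages: a retriever that expands a subgraph outward from the question's topic entities, and a downstream reasoner that sees only that subgraph. A minimal sketch of this interface, with an invented keyword-overlap scorer standing in for SR's learned relation scorer and a trivial stub in place of a real reasoner such as NSM (all names and the toy KB are hypothetical, not from the paper):

```python
from collections import deque

# Toy knowledge base of (head, relation, tail) triples -- illustrative only.
KB = [
    ("Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "Hawaii"),
    ("Obama", "spouse", "Michelle"),
    ("Hawaii", "part_of", "USA"),
]

def relation_score(question, relation):
    # Stand-in for the trainable retriever's relation scorer:
    # fraction of relation tokens appearing in the question.
    q_tokens = set(question.lower().replace("?", "").split())
    r_tokens = set(relation.split("_"))
    return len(q_tokens & r_tokens) / len(r_tokens)

def retrieve_subgraph(question, topic_entities, max_hops=2, threshold=0.5):
    # Expand outward from the topic entities, keeping only triples whose
    # relation scores above the threshold. Note the retrieval never
    # consults the reasoner -- the two stages are decoupled.
    subgraph = []
    frontier = deque((e, 0) for e in topic_entities)
    visited = set(topic_entities)
    while frontier:
        entity, hop = frontier.popleft()
        if hop >= max_hops:
            continue
        for h, r, t in KB:
            if h == entity and relation_score(question, r) >= threshold:
                subgraph.append((h, r, t))
                if t not in visited:
                    visited.add(t)
                    frontier.append((t, hop + 1))
    return subgraph

def reason(question, subgraph):
    # Any subgraph-oriented reasoner can plug in here; this stub
    # simply returns the tail entities of the retrieved triples.
    return [t for _, _, t in subgraph]
```

Because `reason` receives only the retrieved subgraph, the retriever can be swapped or retrained without touching the reasoner, which is the plug-and-play property the paper emphasizes.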

Authors (7)
  1. Jing Zhang (731 papers)
  2. Xiaokang Zhang (42 papers)
  3. Jifan Yu (49 papers)
  4. Jian Tang (327 papers)
  5. Jie Tang (302 papers)
  6. Cuiping Li (42 papers)
  7. Hong Chen (230 papers)
Citations (95)
