
Bridging the KB-Text Gap: Leveraging Structured Knowledge-aware Pre-training for KBQA (2308.14436v1)

Published 28 Aug 2023 in cs.CL and cs.IR

Abstract: Knowledge Base Question Answering (KBQA) aims to answer natural language questions with factual information, such as the entities and relations stored in KBs. However, traditional Pre-trained Language Models (PLMs) are pre-trained directly on large-scale natural language corpora, which makes it difficult for them to understand and represent the complex subgraphs found in structured KBs. To bridge the gap between texts and structured KBs, we propose a Structured Knowledge-aware Pre-training method (SKP). In the pre-training stage, we introduce two novel structured knowledge-aware tasks that guide the model to effectively learn implicit relationships and better representations of complex subgraphs. In the downstream KBQA task, we further design an efficient linearization strategy and an interval attention mechanism, which respectively help the model encode complex subgraphs and shield it from the interference of irrelevant subgraphs during reasoning. Detailed experiments and analyses on WebQSP verify the effectiveness of SKP, in particular a significant improvement in subgraph retrieval (+4.08% H@10).
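
The abstract only names the linearization strategy and the interval attention mechanism; the paper's exact formulation is not reproduced here. As a rough, hypothetical sketch (the function names and the block-diagonal masking scheme are assumptions, not the authors' implementation), the snippet below shows one way a set of KB triples could be linearized into a token sequence and paired with an attention mask that confines each token to its own subgraph interval, shielding reasoning from irrelevant subgraphs:

```python
import torch

# Hypothetical sketch of subgraph linearization plus an interval attention
# mask. Names and the masking scheme are illustrative assumptions; the
# paper's actual mechanism may differ.

def linearize_triples(triples):
    """Flatten (head, relation, tail) triples into one token sequence,
    recording the [start, end) interval each triple occupies."""
    tokens, intervals = [], []
    for head, rel, tail in triples:
        start = len(tokens)
        tokens.extend([head, rel, tail])
        intervals.append((start, len(tokens)))
    return tokens, intervals

def build_interval_mask(num_tokens, intervals):
    """Boolean mask (True = attention allowed): tokens may only attend
    within their own triple's interval, so unrelated subgraphs cannot
    interfere with each other during encoding."""
    mask = torch.zeros(num_tokens, num_tokens, dtype=torch.bool)
    for start, end in intervals:
        mask[start:end, start:end] = True
    return mask

triples = [("Barack Obama", "place_of_birth", "Honolulu"),
           ("Honolulu", "located_in", "Hawaii")]
tokens, intervals = linearize_triples(triples)
mask = build_interval_mask(len(tokens), intervals)
print(tokens)        # linearized token sequence
print(mask.int())    # block-diagonal interval mask
```

In practice such a mask would be applied to a Transformer encoder's attention logits (e.g., as large negative values on disallowed positions); the triple-level interval granularity used here is also an assumption.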

Authors (6)
  1. Guanting Dong (46 papers)
  2. Rumei Li (8 papers)
  3. Sirui Wang (31 papers)
  4. Yupeng Zhang (25 papers)
  5. Yunsen Xian (17 papers)
  6. Weiran Xu (58 papers)
Citations (13)