
Learning Representation Mapping for Relation Detection in Knowledge Base Question Answering (1907.07328v1)

Published 17 Jul 2019 in cs.CL

Abstract: Relation detection is a core step in many natural language processing applications, including knowledge base question answering. Previous efforts show that single-fact questions can be answered with high accuracy. However, one critical problem is that current approaches only achieve high accuracy for questions whose relations have been seen in the training data; for unseen relations, performance drops rapidly. The main reason is that representations for unseen relations are missing. In this paper, we propose a simple mapping method, named representation adapter, to learn the representation mapping for both seen and unseen relations based on previously learned relation embeddings. We employ an adversarial objective and a reconstruction objective to improve the mapping performance. We re-organize the popular SimpleQuestion dataset to reveal and evaluate the problem of detecting unseen relations. Experiments show that our method greatly improves performance on unseen relations while keeping performance on seen relations comparable to the state-of-the-art. Our code and data are available at https://github.com/wudapeng268/KBQA-Adapter.
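The adapter idea in the abstract can be illustrated with a minimal numerical sketch. Everything below is a hypothetical simplification, not the authors' implementation: a single linear mapping `W` trained with a fit term toward invented stand-in targets for seen relations, plus a reconstruction term of the kind the abstract mentions (the adversarial objective is omitted for brevity); all dimensions, data, and variable names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seen, dim = 20, 8

# Stand-ins for pretrained relation embeddings and for the
# representations a QA model learned for the seen relations.
E = rng.normal(size=(n_seen, dim))
T = rng.normal(size=(n_seen, dim))

W = np.eye(dim)        # linear adapter, initialized at the identity
lr, lam = 0.01, 0.1    # step size; weight of the reconstruction term

def objective(W):
    mapped = E @ W
    fit = np.mean((mapped - T) ** 2)          # match seen-relation targets
    recon = np.mean((mapped @ W.T - E) ** 2)  # reconstruction objective
    return fit + lam * recon

loss_start = objective(W)
for _ in range(500):
    mapped = E @ W
    D = mapped @ W.T - E                       # reconstruction residual
    grad_fit = 2.0 / E.size * (E.T @ (mapped - T))
    grad_recon = 2.0 / E.size * (E.T @ D @ W + D.T @ mapped)
    W -= lr * (grad_fit + lam * grad_recon)
loss_end = objective(W)
```

Once trained on seen relations, the same `W` can map the pretrained embedding of an unseen relation (`e_unseen @ W`) into the detection model's space, which is what lets performance on unseen relations improve without any labeled examples for them.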

Authors (7)
  1. Peng Wu (119 papers)
  2. Shujian Huang (106 papers)
  3. Rongxiang Weng (26 papers)
  4. Zaixiang Zheng (25 papers)
  5. Jianbing Zhang (29 papers)
  6. Xiaohui Yan (9 papers)
  7. Jiajun Chen (125 papers)
Citations (31)