
Faithful Embeddings for Knowledge Base Queries (2004.03658v3)

Published 7 Apr 2020 in cs.LG, cs.CL, and stat.ML

Abstract: The deductive closure of an ideal knowledge base (KB) contains exactly the logical queries that the KB can answer. However, in practice KBs are both incomplete and over-specified, failing to answer some queries that have real-world answers. *Query embedding* (QE) techniques have been recently proposed where KB entities and KB queries are represented jointly in an embedding space, supporting relaxation and generalization in KB inference. However, experiments in this paper show that QE systems may disagree with deductive reasoning on answers that do not require generalization or relaxation. We address this problem with a novel QE method that is more faithful to deductive reasoning, and show that this leads to better performance on complex queries to incomplete KBs. Finally we show that inserting this new QE module into a neural question-answering system leads to substantial improvements over the state-of-the-art.
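To make the query-embedding idea concrete, here is a minimal sketch of one common QE scheme (TransE-style relation composition over a toy KB). This is an illustrative assumption for exposition only, not the paper's proposed method; the entity names, relation names, and `embed_query`/`rank_answers` helpers are hypothetical.

```python
import numpy as np

# Hypothetical toy KB: random entity and relation embeddings.
rng = np.random.default_rng(0)
entities = {name: rng.normal(size=8) for name in ["paris", "france", "europe"]}
relations = {name: rng.normal(size=8) for name in ["capital_of", "located_in"]}

def embed_query(start_entity, relation_path):
    """Embed a chain query like (paris, capital_of / located_in, ?) by
    composing relation translations onto the start entity's vector
    (a TransE-style assumption, not the paper's construction)."""
    q = entities[start_entity].copy()
    for rel in relation_path:
        q = q + relations[rel]
    return q

def rank_answers(query_vec):
    """Score every entity by negative Euclidean distance to the query
    embedding; higher-scoring entities are more likely answers."""
    scores = {e: -np.linalg.norm(v - query_vec) for e, v in entities.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank_answers(embed_query("paris", ["capital_of", "located_in"]))
print(ranking)  # entities ordered by proximity to the query embedding
```

Because both queries and entities live in one vector space, a nearest-neighbor lookup can return plausible answers even when the KB lacks the exact facts needed for a deductive proof; the paper's point is that such systems can also drift from deductive answers that *are* entailed, which its faithful QE method corrects.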

Authors (5)
  1. Haitian Sun (16 papers)
  2. Andrew O. Arnold (9 papers)
  3. Tania Bedrax-Weiss (7 papers)
  4. Fernando Pereira (33 papers)
  5. William W. Cohen (79 papers)
Citations (4)
