
Combining Axiom Injection and Knowledge Base Completion for Efficient Natural Language Inference (1811.06203v1)

Published 15 Nov 2018 in cs.CL and cs.AI

Abstract: In logic-based approaches to reasoning tasks such as Recognizing Textual Entailment (RTE), it is important for a system to have a large amount of knowledge data. However, there is a tradeoff between adding more knowledge data for improved RTE performance and maintaining an efficient RTE system, since a large database is problematic in terms of memory usage and computational complexity. In this work, we show that the processing time of a state-of-the-art logic-based RTE system can be significantly reduced by replacing its search-based axiom injection (abduction) mechanism with one based on Knowledge Base Completion (KBC). We integrate this mechanism in a Coq plugin that provides a proof automation tactic for natural language inference. Additionally, we show empirically that adding new knowledge data contributes to better RTE performance without harming processing speed in this framework.
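The abstract's core idea, replacing a search over the knowledge base with a learned KBC scoring function that decides which axioms to inject, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the toy embeddings, the DistMult-style scoring function, and the threshold are all assumptions made for the sketch.

```python
import numpy as np

# Toy entity and relation embeddings (hypothetical; a real system would
# train these with a KBC model on knowledge-base triples).
EMB = {
    "cat":    np.array([0.9, 0.1]),
    "animal": np.array([0.8, 0.2]),
    "stone":  np.array([0.0, 1.0]),
}
REL = {"hyponym": np.array([1.0, 0.5])}

def kbc_score(head, rel, tail):
    """DistMult-style trilinear score: higher means the triple is
    judged more plausible by the KBC model."""
    return float(np.sum(EMB[head] * REL[rel] * EMB[tail]))

def inject_axioms(pairs, rel="hyponym", threshold=0.5):
    """Keep only candidate axioms whose KBC score clears the threshold,
    instead of searching the whole knowledge base (abduction)."""
    return [(h, t) for h, t in pairs if kbc_score(h, rel, t) >= threshold]

# "cat is a kind of animal" scores high and is injected as an axiom;
# "cat is a kind of stone" scores low and is filtered out.
print(inject_axioms([("cat", "animal"), ("cat", "stone")]))
```

Because each candidate is scored by a fixed-size embedding lookup and a dot product, the cost per candidate stays constant regardless of how large the knowledge base grows, which is the efficiency argument the abstract makes.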

Authors (4)
  1. Masashi Yoshikawa (10 papers)
  2. Koji Mineshima (20 papers)
  3. Hiroshi Noji (11 papers)
  4. Daisuke Bekki (15 papers)
Citations (11)
