Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base (2002.06115v1)

Published 14 Feb 2020 in cs.CL, cs.LG, and stat.ML

Abstract: We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB. This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs. The sparse-matrix reified KB can be distributed across multiple GPUs, can scale to tens of millions of entities and facts, and is orders of magnitude faster than naive sparse-matrix implementations. The reified KB enables very simple end-to-end architectures to obtain competitive performance on several benchmarks representing two families of tasks: KB completion, and learning semantic parsers from denotations.
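
To make the abstract's idea concrete, below is a minimal sketch (not the authors' implementation) of how a KB of (subject, relation, object) triples can be "reified" into sparse matrices, so that a single-hop relation-following step reduces to sparse matrix-vector products and is therefore differentiable. The names `M_subj`, `M_rel`, `M_obj`, and `follow`, and the toy data, are illustrative assumptions based on the description above.

```python
# Sketch: a reified KB as sparse matrices (illustrative; assumptions noted above).
# Each triple t is indexed by a row; three sparse matrices map triples to their
# subject entity, relation, and object entity respectively.
import numpy as np
from scipy import sparse

# Toy KB vocabularies and facts.
entities = {"paris": 0, "france": 1, "berlin": 2, "germany": 3}
relations = {"capital_of": 0, "located_in": 1}
triples = [("paris", "capital_of", "france"),
           ("berlin", "capital_of", "germany"),
           ("paris", "located_in", "france")]

n_t, n_e, n_r = len(triples), len(entities), len(relations)

def one_hot_matrix(rows, cols, shape):
    data = np.ones(len(rows), dtype=np.float32)
    return sparse.csr_matrix((data, (list(rows), cols)), shape=shape)

# M_subj[t, e] = 1 if triple t has subject e; analogously for relations and objects.
M_subj = one_hot_matrix(range(n_t), [entities[s] for s, _, _ in triples], (n_t, n_e))
M_rel  = one_hot_matrix(range(n_t), [relations[r] for _, r, _ in triples], (n_t, n_r))
M_obj  = one_hot_matrix(range(n_t), [entities[o] for _, _, o in triples], (n_t, n_e))

def follow(x, r):
    """Given a (possibly soft) weight vector x over entities and r over relations,
    return weights over object entities reachable through matching triples.
    Everything is a sparse matrix-vector product, hence end-to-end differentiable."""
    triple_weights = (M_subj @ x) * (M_rel @ r)   # how strongly each triple fires
    return M_obj.T @ triple_weights               # project triple weights back to entities

# Example: which entities does "paris" reach via "capital_of"?
x = np.zeros(n_e, dtype=np.float32); x[entities["paris"]] = 1.0
r = np.zeros(n_r, dtype=np.float32); r[relations["capital_of"]] = 1.0
print(follow(x, r))  # puts mass on "france"
```

Multi-hop inference of the kind the abstract mentions would correspond to composing such `follow` steps, with the sparse matrices sharded across GPUs for large KBs.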

Authors (4)
  1. William W. Cohen (79 papers)
  2. Haitian Sun (16 papers)
  3. R. Alex Hofer (5 papers)
  4. Matthew Siegler (4 papers)
Citations (58)