Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge (2007.00849v1)

Published 2 Jul 2020 in cs.CL, cs.AI, and cs.LG

Abstract: Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information. However, that knowledge exists only within the latent parameters of the model, inaccessible to inspection and interpretation, and even worse, factual information memorized from the training corpora is likely to become stale as the world changes. Knowledge stored as parameters will also inevitably exhibit all of the biases inherent in the source materials. To address these problems, we develop a neural language model that includes an explicit interface between symbolically interpretable factual information and subsymbolic neural knowledge. We show that this model dramatically improves performance on two knowledge-intensive question-answering tasks. More interestingly, the model can be updated without re-training by manipulating its symbolic representations. In particular, this model allows us to add new facts and overwrite existing ones in ways that are not possible for earlier models.
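
To make the mechanism in the abstract concrete, below is a minimal sketch of the "explicit interface" idea: a key-value memory of symbolic triples that a language model can attend over, and that can be edited directly. This is an illustration under assumptions, not the authors' actual Facts-as-Experts implementation; the names (`FactMemory`, `add_fact`, `query`) are hypothetical, and in the real model the key/value encoders are learned jointly with the language model.

```python
import numpy as np

class FactMemory:
    """Key-value memory of symbolic triples, kept outside the trained weights.

    Keys encode (subject, relation) pairs; values are object-entity
    embeddings. Because the store is external to the network parameters,
    facts can be added or overwritten with no gradient updates.
    """

    def __init__(self, dim):
        self.dim = dim
        self.keys = []     # encoded (subject, relation) vectors
        self.values = []   # object-entity embedding vectors
        self.triples = []  # symbolic (subject, relation, object) labels, for inspection

    def add_fact(self, key_vec, value_vec, triple):
        # Overwriting: replace the entry whose (subject, relation) matches.
        for i, (s, r, _) in enumerate(self.triples):
            if (s, r) == triple[:2]:
                self.keys[i], self.values[i], self.triples[i] = key_vec, value_vec, triple
                return
        self.keys.append(key_vec)
        self.values.append(value_vec)
        self.triples.append(triple)

    def query(self, query_vec):
        # Soft attention over fact keys: dot-product scores -> softmax ->
        # weighted sum of value embeddings, which the language model would
        # mix back into its contextual representation.
        scores = np.stack(self.keys) @ query_vec
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ np.stack(self.values)

# Toy usage: correct a stale fact without retraining anything.
rng = np.random.default_rng(0)
mem = FactMemory(dim=8)
key = rng.normal(size=8)  # stands in for an encoding of ("UK", "head_of_state")
mem.add_fact(key, rng.normal(size=8), ("UK", "head_of_state", "Elizabeth II"))
mem.add_fact(key, rng.normal(size=8), ("UK", "head_of_state", "Charles III"))  # overwrite
answer_vec = mem.query(key)  # now retrieves the updated object embedding
```

Because the triples live in an external store rather than in the network weights, adding a fact is an append and overwriting is an in-place replacement, which is what allows the model's answers to change without re-training.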

Authors (4)
  1. Pat Verga (16 papers)
  2. Haitian Sun (16 papers)
  3. Livio Baldini Soares (18 papers)
  4. William W. Cohen (79 papers)
Citations (50)