Are Pretrained Language Models Symbolic Reasoners Over Knowledge? (2006.10413v2)

Published 18 Jun 2020 in cs.CL

Abstract: How can pretrained language models (PLMs) learn factual knowledge from the training set? We investigate the two most important mechanisms: reasoning and memorization. Prior work has attempted to quantify the number of facts PLMs learn, but we present, using synthetic data, the first study that investigates the causal relation between facts present in training and facts learned by the PLM. For reasoning, we show that PLMs seem to learn to apply some symbolic reasoning rules correctly but struggle with others, including two-hop reasoning. Further analysis suggests that even the application of learned reasoning rules is flawed. For memorization, we identify schema conformity (facts systematically supported by other facts) and frequency as key factors for its success.

Authors (3)
  1. Nora Kassner
  2. Benno Krojer
  3. Hinrich Schütze
Citations (5)