Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge (2006.06609v3)

Published 11 Jun 2020 in cs.CL, cs.AI, and cs.LG

Abstract: To what extent can a neural network systematically reason over symbolic facts? Evidence suggests that large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control. Recently, it has been shown that Transformer-based models succeed in consistent reasoning over explicit symbolic facts, under a "closed-world" assumption. However, in an open-domain setup, it is desirable to tap into the vast reservoir of implicit knowledge already encoded in the parameters of pre-trained LMs. In this work, we provide a first demonstration that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements. To do this, we describe a procedure for automatically generating datasets that teach a model new reasoning skills, and demonstrate that models learn to effectively perform inference which involves implicit taxonomic and world knowledge, chaining and counting. Finally, we show that "teaching" models to reason generalizes beyond the training distribution: they successfully compose the usage of multiple reasoning skills in single examples. Our work paves a path towards open-domain systems that constantly improve by interacting with users who can instantly correct a model by adding simple natural language statements.
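The abstract describes automatically generating training examples that pair explicit natural-language rules with knowledge the model must supply implicitly. As a rough illustration only (this is not the paper's released code; the toy taxonomy, rule table, and `make_example` helper are all hypothetical), the sketch below builds one such example: the context states a rule about a hypernym, the hypernym fact itself is withheld, and answering requires the model's implicit taxonomic knowledge plus one chaining step over the explicit rule.

```python
import random

# Hypothetical toy taxonomy and property rules; the paper constructs such
# data automatically and at much larger scale from existing resources.
HYPERNYMS = {"whale": "mammal", "sparrow": "bird", "salmon": "fish"}
RULES = {
    # hypernym -> (statement form, question form) of a property
    "mammal": ("breathes air", "breathe air"),
    "bird": ("has feathers", "have feathers"),
    "fish": ("has gills", "have gills"),
}

def make_example(entity: str) -> dict:
    """Build one (context, question, answer) triple.

    The explicit context states a rule about the hypernym, but the
    hypernym fact itself (e.g. "a whale is a mammal") is deliberately
    omitted, so a correct answer requires the implicit link
    entity -> hypernym plus one chaining step over the explicit rule.
    """
    hyper = HYPERNYMS[entity]
    statement, question_form = RULES[hyper]
    context = f"Every {hyper} {statement}."
    question = f"Does a {entity} {question_form}?"
    return {"context": context, "question": question, "answer": "yes"}

if __name__ == "__main__":
    print(make_example(random.choice(list(HYPERNYMS))))
```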

Authors (5)
  1. Alon Talmor (13 papers)
  2. Oyvind Tafjord (49 papers)
  3. Peter Clark (108 papers)
  4. Yoav Goldberg (142 papers)
  5. Jonathan Berant (107 papers)
Citations (38)