
LAMBADA: Backward Chaining for Automated Reasoning in Natural Language (2212.13894v2)

Published 20 Dec 2022 in cs.AI and cs.LG

Abstract: Remarkable progress has been made on automated reasoning with natural text, by using language models (LMs) and methods such as Chain-of-Thought and Selection-Inference. These techniques search for proofs in the forward direction from axioms to the conclusion, which suffers from a combinatorial explosion of the search space, and thus high failure rates for problems requiring longer chains of reasoning. The classical automated reasoning literature has shown that reasoning in the backward direction (i.e., from the intended conclusion to supporting axioms) is significantly more efficient at proof-finding. Importing this intuition into the LM setting, we develop a Backward Chaining algorithm, called LAMBADA, that decomposes reasoning into four sub-modules. These sub-modules are simply implemented by few-shot prompted LM inference. We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods on challenging logical reasoning datasets, particularly when deep and accurate proof chains are required.
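The abstract describes decomposing backward chaining into sub-modules, each implemented by few-shot prompted LM inference. As a rough illustration of the underlying control flow, here is a minimal classical backward chainer in Python. The rule format, exact-string-match sub-module stubs, and depth bound are illustrative assumptions rather than the paper's implementation: LAMBADA replaces each stub with an LM call, and only three of the four sub-modules have analogues in this sketch.

```python
# Minimal sketch of backward chaining (goal -> axioms), with hypothetical
# stubs standing in for LAMBADA's LM-implemented sub-modules. In the paper,
# each of these checks is a few-shot prompted LM inference over natural text.

from typing import List, Set, Tuple

Rule = Tuple[List[str], str]  # (premises, conclusion)

def fact_check(goal: str, facts: Set[str]) -> bool:
    # Stub: is the goal directly supported by a known fact?
    return goal in facts

def select_rules(goal: str, rules: List[Rule]) -> List[Rule]:
    # Stub: pick rules whose conclusion matches the current goal.
    return [rule for rule in rules if rule[1] == goal]

def decompose(rule: Rule) -> List[str]:
    # Stub: a selected rule's premises become the new sub-goals.
    return rule[0]

def prove(goal: str, facts: Set[str], rules: List[Rule], depth: int = 6) -> bool:
    # Backward chaining: start from the intended conclusion and recurse
    # toward supporting axioms, bounding depth to avoid infinite regress.
    if depth == 0:
        return False
    if fact_check(goal, facts):
        return True
    for rule in select_rules(goal, rules):
        if all(prove(sub, facts, rules, depth - 1) for sub in decompose(rule)):
            return True
    return False

# Toy example (illustrative data, not from the paper's datasets):
facts = {"the cat is small", "the cat is furry"}
rules = [(["the cat is small", "the cat is furry"], "the cat is cute")]
print(prove("the cat is cute", facts, rules))  # True
```

Because the search branches only on rules whose conclusion matches the current goal, the explored space stays far smaller than forward chaining from all axioms, which is the efficiency argument the abstract imports from classical automated reasoning.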

Authors (5)
  1. Mehran Kazemi (26 papers)
  2. Najoung Kim (28 papers)
  3. Deepti Bhatia (5 papers)
  4. Xin Xu (187 papers)
  5. Deepak Ramachandran (28 papers)
Citations (68)