
First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning (2406.16078v2)

Published 23 Jun 2024 in cs.CL

Abstract: Multi-step reasoning instruction, such as chain-of-thought prompting, is widely adopted to elicit better performance from LLMs (LMs). We report on the systematic strategy that LMs employ in such a multi-step reasoning process. Our controlled experiments reveal that LMs rely more heavily on heuristics, such as lexical overlap, in the earlier stages of reasoning, when more reasoning steps remain before the goal. Conversely, their reliance on heuristics decreases as LMs progress closer to the final answer through multiple reasoning steps. This suggests that LMs can backtrack only a limited number of future steps and dynamically combine heuristic strategies with rational ones in tasks involving multi-step reasoning.
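The lexical-overlap heuristic the abstract mentions can be made concrete with a toy sketch: score each candidate step by how much surface vocabulary it shares with the context, and pick the highest-scoring one. The function names and the Jaccard scoring scheme below are illustrative assumptions, not the authors' experimental setup.

```python
def lexical_overlap(context: str, candidate: str) -> float:
    """Jaccard overlap between the token sets of two strings."""
    a = set(context.lower().split())
    b = set(candidate.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


def pick_by_heuristic(context: str, candidates: list[str]) -> str:
    """A purely heuristic 'reasoner': choose the candidate that shares
    the most vocabulary with the context, ignoring logical validity."""
    return max(candidates, key=lambda c: lexical_overlap(context, c))
```

A model leaning on this heuristic would favor lexically similar continuations early in a reasoning chain, even when a less similar candidate is the logically correct next step.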

Authors (7)
  1. Yoichi Aoki (5 papers)
  2. Keito Kudo (7 papers)
  3. Tatsuki Kuribayashi (31 papers)
  4. Shusaku Sone (5 papers)
  5. Masaya Taniguchi (4 papers)
  6. Keisuke Sakaguchi (44 papers)
  7. Kentaro Inui (119 papers)