
From Complex to Simple: Unraveling the Cognitive Tree for Reasoning with Small Language Models (2311.06754v1)

Published 12 Nov 2023 in cs.CL

Abstract: Reasoning is a distinctive human capacity, enabling us to address complex problems by breaking them down into a series of manageable cognitive steps. Yet, complex logical reasoning is still cumbersome for LLMs. Based on the dual process theory in cognitive science, we are the first to unravel the cognitive reasoning abilities of LLMs. Our framework employs an iterative methodology to construct a Cognitive Tree (CogTree). The root node of this tree represents the initial query, while the leaf nodes consist of straightforward questions that can be answered directly. This construction involves two main components: the implicit extraction module (referred to as the intuitive system) and the explicit reasoning module (referred to as the reflective system). The intuitive system rapidly generates multiple responses by utilizing in-context examples, while the reflective system scores these responses using comparative learning. The scores guide the intuitive system in its subsequent generation step. Our experimental results on two popular and challenging reasoning tasks indicate that it is possible to achieve a performance level comparable to that of GPT-3.5 (175B parameters) using a significantly smaller language model (≤7B parameters, less than 5% of GPT-3.5's size).
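The iterative tree construction the abstract describes can be sketched as follows. This is a toy illustration, not the paper's implementation: `intuitive_system` and `reflective_system` are hypothetical stand-ins (the paper uses a small LLM with in-context examples and a comparative-learning scorer, respectively), replaced here by deterministic string-splitting and a balance-based score so the loop structure is visible.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    question: str
    children: list = field(default_factory=list)

def intuitive_system(question):
    # Hypothetical stand-in: in the paper, a small LLM rapidly proposes
    # several decompositions of the question using in-context examples.
    # Here we deterministically split the word list at candidate points.
    words = question.split()
    if len(words) <= 2:  # treat short questions as directly answerable
        return []
    cuts = {len(words) // 2, len(words) // 3 + 1}
    return [(" ".join(words[:c]), " ".join(words[c:])) for c in sorted(cuts)]

def reflective_system(candidates):
    # Hypothetical stand-in: the paper scores candidates via comparative
    # learning; as a toy rule, prefer the most balanced decomposition.
    return min(candidates, key=lambda p: abs(len(p[0]) - len(p[1])))

def build_cogtree(query):
    # Root is the initial query; expand until all leaves are simple.
    root = Node(query)
    frontier = [root]
    while frontier:
        node = frontier.pop()
        candidates = intuitive_system(node.question)
        if not candidates:
            continue  # leaf node: directly answerable
        best = reflective_system(candidates)  # score guides next step
        node.children = [Node(q) for q in best]
        frontier.extend(node.children)
    return root

def leaves(node):
    # Collect the leaf questions of the finished tree.
    if not node.children:
        return [node.question]
    return [q for c in node.children for q in leaves(c)]
```

For example, `leaves(build_cogtree("a b c d e f g h"))` yields four two-word leaf questions, mirroring how the framework decomposes the root query into directly answerable sub-questions.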

Authors (6)
  1. Junbing Yan (10 papers)
  2. Chengyu Wang (93 papers)
  3. Taolin Zhang (34 papers)
  4. Xiaofeng He (33 papers)
  5. Jun Huang (126 papers)
  6. Wei Zhang (1489 papers)
Citations (5)
