Chain-of-Thought Reasoning

Updated 16 November 2025
  • Chain-of-Thought reasoning is a technique that decomposes complex problems into sequential, intermediate rationales, enhancing multi-step inference in LLMs.
  • Empirical benchmarks show that CoT improves performance by 15–40 accuracy points on tasks such as arithmetic, logic, and commonsense reasoning.
  • Variants like KD+CoT, LaRS, and SoftCoT further optimize inference efficiency by distilling multi-step reasoning into compact, robust model outputs.

Chain-of-Thought (CoT) reasoning is a prompting and modeling strategy that conditions LLMs to decompose complex tasks into explicit sequences of intermediate rationales prior to generating a final answer. Rather than mapping questions directly to answers, CoT prompting introduces additional linguistic structure, inviting the model to “think aloud” through multiple steps. This mechanistic shift activates latent multi-step inference capabilities, and empirical studies have demonstrated marked improvements in LLM performance across arithmetic, logic, commonsense, and algorithmic benchmarks.

1. Foundations and Formalization

Chain-of-Thought prompting is operationalized by augmenting a model's input with a series of natural-language reasoning steps, denoted as rationales $R = (r_1, r_2, \ldots, r_k)$, followed by an answer $A$. Formally, for input question $Q$, the joint conditional generation is factorized as:

$$P(R, A \mid Q) = P(R \mid Q) \cdot P(A \mid Q, R)$$

where $P(R \mid Q)$ models the process of generating the rationale chain, and $P(A \mid Q, R)$ conditions the answer on both the original input and the generated reasoning trace. In practice, CoT can be applied via zero-shot instructions (e.g., "Let's think step by step") or via few-shot in-context demonstrations, where each exemplar includes both a question and its stepwise solution.
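As a concrete illustration, the sketch below assembles both prompt styles in Python. The helper names and the exemplar format are illustrative conventions, not an API from any of the cited papers:

```python
# Minimal sketch of zero-shot and few-shot CoT prompt construction.

def zero_shot_cot(question: str) -> str:
    # Zero-shot CoT: append a step-by-step instruction to the question.
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot(question: str, exemplars: list) -> str:
    # Few-shot CoT: each exemplar is (question, rationale, answer), so
    # the model sees worked stepwise solutions before the new query.
    blocks = [f"Q: {q}\nA: {r} The answer is {a}."
              for q, r, a in exemplars]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)

exemplars = [(
    "A pen costs $2 and a notebook costs $3. What do 2 pens and 1 notebook cost?",
    "Two pens cost 2 * 2 = 4 dollars; adding one notebook gives 4 + 3 = 7 dollars.",
    "7",
)]
print(few_shot_cot("What do 3 pens and 2 notebooks cost?", exemplars))
```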

A deeper formalization treats CoT as a constrained maximum-likelihood decoding problem, where inference becomes:

$$(S^*, A^*) = \arg\max_{(S, A) \in \mathcal{C}} P_\theta(S, A \mid Q, \text{CoT\_instr})$$

with $\mathcal{C}$ the set of all sequences that match the reasoning-step format. The LLM then preferentially outputs high-likelihood multi-step traces that resemble those seen during training.
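One way to approximate this argmax in practice is best-of-$N$ sampling with a format filter, as in the hedged sketch below; `sample_with_logprob` is a hypothetical hook standing in for any sampler that exposes sequence log-likelihoods:

```python
import re

# Hedged sketch of best-of-N decoding under a format constraint.
# `sample_with_logprob(prompt)` is a hypothetical API returning a
# (text, total_logprob) pair from the underlying LLM.

# C: traces shaped as numbered steps followed by a final answer line.
STEP_FORMAT = re.compile(r"(?:Step \d+:.*\n)+Answer:.*")

def constrained_cot_decode(prompt, sample_with_logprob, n_samples=16):
    best_trace, best_score = None, float("-inf")
    for _ in range(n_samples):
        text, logprob = sample_with_logprob(prompt)
        # Keep only candidates in C and track the highest-likelihood one.
        if STEP_FORMAT.search(text) and logprob > best_score:
            best_trace, best_score = text, logprob
    return best_trace  # None if no sample matched the format
```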

2. Empirical Effectiveness and Benchmarking

Empirical studies have established CoT's effectiveness across a wide spectrum of reasoning tasks. On the BIG-Bench-Hard (BBH) suite, a collection of 23 challenging natural-language reasoning and understanding tasks, vanilla few-shot CoT prompting provides substantial gains over direct question answering. For example, a Qwen-1.8B baseline achieves 17.77% accuracy, which rises to 24.44% (a +37.5% relative gain) with KD+CoT distillation from a Qwen-7B teacher (Do et al., 7 Nov 2025). In Llama2 settings, CoT-based distillation yields 5–7 point improvements even when vanilla KD stagnates.

Broader surveys demonstrate similar performance boosts. In mathematical QA (e.g., GSM8K, SVAMP, MATHQA), conventional natural-language CoT, program-of-thought (PoT), and symbolic-aided chains yield 15–40 point accuracy increases over non-CoT baselines, often matching or exceeding the capabilities of much larger non-CoT-tuned models (Chu et al., 2023, Jie et al., 2023, Nguyen et al., 17 Aug 2025). Self-consistency and ensembling methods further extend these gains.
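The self-consistency idea reduces to sampling several independent chains and majority-voting over their final answers. A minimal sketch, assuming a hypothetical sampler `generate_cot` that returns a full reasoning trace ending in a sentence of the form "The answer is X":

```python
from collections import Counter

# Minimal sketch of self-consistency over sampled CoT traces.

def extract_answer(trace: str) -> str:
    # Assumes traces end with a sentence like "The answer is 7."
    return trace.rsplit("The answer is", 1)[-1].strip(" .\n")

def self_consistency(question: str, generate_cot, k: int = 10) -> str:
    answers = [extract_answer(generate_cot(question)) for _ in range(k)]
    # The most frequent final answer wins, marginalizing over rationales.
    return Counter(answers).most_common(1)[0][0]
```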

3. Distillation and Knowledge Transfer

Recent work has leveraged CoT in knowledge distillation (KD), particularly for the transfer of multi-step reasoning capability from large “teacher” LLMs to smaller “student” models. White-box KD+CoT operates by matching the student’s next-token distributions to the teacher’s across all tokens in the input, rationale, and answer, typically under a pure KL objective:

$$L_\mathrm{KD} = \mathrm{KL}\big[\, p_t(\cdot \mid x_\mathrm{CoT}) \;\|\; p_s(\cdot \mid x_\mathrm{CoT}) \,\big]$$

where $x_\mathrm{CoT}$ is the input concatenated with rationale steps and answer. This approach encodes the teacher's multi-step inductive biases directly into the student, guiding internalization of intermediate reasoning patterns rather than superficial output replication. Empirical results show that KD+CoT closes approximately half the performance gap to the teacher, with no increase in model size or inference latency (Do et al., 7 Nov 2025).
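The following PyTorch sketch shows one plausible realization of this token-level KL objective. It assumes teacher and student logits have already been computed over the same tokenization of $x_\mathrm{CoT}$, and is an illustrative reconstruction rather than the paper's reference code:

```python
import torch
import torch.nn.functional as F

# Token-level KL distillation over the full CoT sequence.
# Shapes: (batch, seq_len, vocab).

def kd_cot_loss(teacher_logits: torch.Tensor,
                student_logits: torch.Tensor,
                temperature: float = 1.0) -> torch.Tensor:
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL[p_t || p_s], averaged per token over every position in x_CoT.
    return F.kl_div(student_log_probs.flatten(0, 1),
                    teacher_probs.flatten(0, 1),
                    reduction="batchmean") * (t * t)
```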

4. Selection, Compression, and Efficiency

Algorithmic refinements exploit various mechanisms to improve CoT's efficiency and selectivity. Latent Reasoning Skills (LaRS) formulates rationale selection as unsupervised latent-embedding matching, where demonstrations are chosen by cosine similarity of inferred latent "skill" vectors between question and rationale (Xu et al., 2023). In practice, LaRS reduces retrieval time by $4\times$ and halves LLM inference calls for prompt selection, outperforming purely question-based retrieval on TabMWP, GSM8K, Spider, and COGS.
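The selection rule itself is simple, as the loose sketch below conveys: rank candidate demonstrations by cosine similarity between the query's inferred skill vector and each rationale's. Here `embed_skill` is a hypothetical stand-in for the paper's unsupervised skill encoder, and each pool entry is assumed to carry a "rationale" field:

```python
import numpy as np

# Loose sketch of latent-skill demonstration selection (LaRS-style).

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def select_demonstrations(question: str, pool: list, embed_skill, k: int = 4):
    q_vec = embed_skill(question)
    ranked = sorted(pool,
                    key=lambda d: cosine(q_vec, embed_skill(d["rationale"])),
                    reverse=True)
    return ranked[:k]
```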

Stepwise perplexity-guided refinement (SPIRIT) identifies and prunes low-importance steps by measuring the increase in sequence perplexity upon removal or merging. The critical step score is:

$$\Delta P_j = \mathrm{PPL}(x, \mathrm{seq} \setminus r^j) - \mathrm{PPL}(x, \mathrm{seq})$$

Only steps with substantial $\Delta P_j$ are retained. Experimental results on DeepMind Math and MetaMathQA show that pruning reduces token count by $25\%$–$70\%$ while maintaining accuracy; merging further restores coherence in reduced chains (Cui et al., 18 Feb 2025).
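A simplified one-pass version of this pruning rule might look as follows. Here `ppl` is a hypothetical function returning the model's perplexity of a step sequence given input $x$; only the $\Delta P_j$ scoring rule itself comes from the text (SPIRIT applies it iteratively, with merging as well):

```python
# Simplified one-pass sketch of perplexity-guided step pruning.

def prune_steps(x: str, steps: list, ppl, threshold: float) -> list:
    base = ppl(x, steps)
    kept = []
    for j, step in enumerate(steps):
        without_j = steps[:j] + steps[j + 1:]
        delta = ppl(x, without_j) - base  # Delta P_j
        # Retain only steps whose removal raises perplexity substantially.
        if delta >= threshold:
            kept.append(step)
    return kept
```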

SoftCoT extends CoT to continuous-space reasoning, generating instance-specific "soft thought" token embeddings in latent space that are projected into the backbone LLM via a trainable linear module, thus circumventing full model fine-tuning and the catastrophic forgetting it risks. On GSM8K and related benchmarks, SoftCoT outperforms hard-token CoT and fully continuous CoT-encoded models (Xu et al., 17 Feb 2025).
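Architecturally, the core idea reduces to a small trainable projection between embedding spaces, sketched below with illustrative dimensions; the actual SoftCoT assistant model and training recipe are more involved:

```python
import torch
import torch.nn as nn

# Minimal sketch of the SoftCoT projection module.

class SoftThoughtProjector(nn.Module):
    def __init__(self, assistant_dim: int = 768,
                 backbone_dim: int = 4096) -> None:
        super().__init__()
        # The only trainable component; the backbone LLM stays frozen,
        # which is what sidesteps catastrophic forgetting.
        self.proj = nn.Linear(assistant_dim, backbone_dim)

    def forward(self, soft_thoughts: torch.Tensor) -> torch.Tensor:
        # soft_thoughts: (batch, num_soft_tokens, assistant_dim)
        # output:        (batch, num_soft_tokens, backbone_dim), ready to
        # be prepended to the backbone's input token embeddings.
        return self.proj(soft_thoughts)
```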

5. Faithfulness, Reliability, and Cognitive Perspectives

Despite performance improvements, CoT reasoning faces challenges in reliability and faithfulness. Studies demonstrate that LLMs often produce unfaithful or post-hoc rationalizations in CoT—yielding logically inconsistent or shortcut chain traces even in unbiased, natural prompts (Arcuschin et al., 11 Mar 2025). Confirmation bias is prominent: a model's internal prior over answer choices can influence both the generation of rationales and the subsequent interpretation of those rationales, sometimes overriding explicit reasoning cues. Empirical correlations show that strong model beliefs suppress rationale informativeness, and that CoT effectiveness varies with task “vulnerability” to such biases (Wan et al., 14 Jun 2025).

Mechanistic studies have explored the neural encoding of CoT reliability by probing attention head activations for veracity signals (Chen et al., 14 Jul 2025), and utilizing representation-space interventions (“RoT”) derived from Hopfieldian cognitive theory to localize and correct reasoning errors (Hu et al., 4 Oct 2024). Robust faithfulness further depends on high-quality rationale selection and external verification. PAC-learning frameworks specify sample complexity bounds for learning verifiers capable of filtering faulty natural-language chains with formal guarantees (Balcan et al., 28 May 2025).

6. Structural Variants and Multimodal Extensions

The CoT paradigm has expanded to structured and multi-modal formats. Symbolic-Aided Chain-of-Thought (SA-CoT) augments few-shot prompts with lightweight symbolic scaffolding, embedding atomic logical operators (RuleMatching, RuleInference, KBUpdating) to constrain model reasoning in a parseable, transparent scaffold. SA-CoT achieves up to $22\%$ absolute accuracy gains over conventional CoT for logical QA (Nguyen et al., 17 Aug 2025). Quasi-symbolic abstraction (QuaSAR) guides LLMs to extract relevant predicates, variables, and constants, improving robustness and transfer on adversarial and symbolic tasks (Ranaldi et al., 18 Feb 2025).
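To make the scaffold concrete, the following is a hypothetical exemplar built from the three atomic operators named above; the exact surface syntax used in the SA-CoT paper may differ:

```python
# Hypothetical illustration of a symbolic-aided CoT exemplar.

SA_COT_EXEMPLAR = """\
Q: All birds can fly. Tweety is a bird. Can Tweety fly?
A:
RuleMatching: bird(x) -> fly(x) matches the fact bird(Tweety).
RuleInference: From bird(Tweety) and bird(x) -> fly(x), derive fly(Tweety).
KBUpdating: Add fly(Tweety) to the knowledge base.
Answer: Yes.
"""
print(SA_COT_EXEMPLAR)
```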

In vision-language reasoning, CoT enables interpretability and performance gains via decomposition into "Description then Decision" stages, effectively bridging visual and textual domains on benchmarks such as Winoground, VQAv2, and MSCOCO. Chain-of-Thought prompt tuning for vision-language models leverages stepwise prompt chaining and learned meta-net visual biases to enhance zero-shot, few-shot, and domain generalization (Wu et al., 2023, Ge et al., 2023).

7. Controversies, Theoretical Perspectives, and Future Directions

Theoretical critiques posit that CoT does not elicit genuine abstract reasoning but instead serves as a tight constraint to guide imitation of high-likelihood multi-step text patterns; thus, performance gains may reflect pattern-matching over reasoning per se (Shao et al., 3 Jun 2025). Empirical evidence for “emergent reasoning” remains inseparable from the combinatorial expansion of the permissible output set under the stepwise format.

Open questions persist regarding structural generalization, task faithfulness, verification, and the interplay between pretrained priors and in-context cues (Yang et al., 1 Sep 2025, Chu et al., 2023). Future research will focus on hybrid architectures that combine LLM inductive capacity with explicit symbolic machinery, more robust rationales, improved verification, and extension to multimodal and interactive reasoning.


Chain-of-Thought reasoning provides a powerful methodological lens for eliciting, transferring, and auditing complex reasoning in LLMs. Its continued evolution will be shaped by advances in selection, distillation, verification, and the articulation of its limits in abstraction and faithfulness.
