
Unlocking General Long Chain-of-Thought Reasoning Capabilities of Large Language Models via Representation Engineering (2503.11314v2)

Published 14 Mar 2025 in cs.CL

Abstract: Recent advancements in long chain-of-thought (long CoT) reasoning have significantly improved the reasoning capabilities of large language models (LLMs). Existing work finds that the capability of long CoT reasoning can be efficiently elicited by tuning on only a few examples and can easily transfer to other tasks. This motivates us to investigate whether long CoT reasoning is a general capability for LLMs. In this work, we conduct an empirical analysis of this question from the perspective of representation. We find that LLMs do encode long CoT reasoning as a general capability, with a clear distinction from vanilla CoTs. Furthermore, domain-specific representations are also required for the effective transfer of long CoT reasoning. Inspired by these findings, we propose GLoRE, a novel representation engineering method to unleash the general long CoT reasoning capabilities of LLMs. Extensive experiments demonstrate the effectiveness and efficiency of GLoRE in both in-domain and cross-domain scenarios.
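The abstract does not spell out GLoRE's mechanics, but representation engineering methods of this family typically extract a steering direction from hidden activations (e.g., a difference of means between activations elicited by long-CoT versus vanilla-CoT prompts) and add it to the model's hidden states at inference time. The sketch below illustrates that generic pattern only; the function names, the difference-of-means extraction, and the scaling parameter `alpha` are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def steering_vector(acts_long_cot: np.ndarray, acts_vanilla: np.ndarray) -> np.ndarray:
    """Unit-norm difference-of-means direction between two activation sets.

    Each input is (num_examples, hidden_dim): hidden states collected while
    the model processes long-CoT vs. vanilla-CoT prompts (hypothetical setup).
    """
    v = acts_long_cot.mean(axis=0) - acts_vanilla.mean(axis=0)
    return v / np.linalg.norm(v)

def steer(hidden_states: np.ndarray, direction: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Shift hidden states along the steering direction by strength alpha."""
    return hidden_states + alpha * direction

# Toy demonstration with synthetic activations.
rng = np.random.default_rng(0)
d = 16
acts_long = rng.normal(loc=1.0, size=(32, d))     # stand-in long-CoT activations
acts_vanilla = rng.normal(loc=0.0, size=(32, d))  # stand-in vanilla-CoT activations
v = steering_vector(acts_long, acts_vanilla)

h = rng.normal(size=(4, d))           # hidden states at some layer
h_steered = steer(h, v, alpha=2.0)    # every row shifted by 2.0 * v
```

In practice such a shift would be applied via a forward hook on a chosen transformer layer; the paper's actual intervention points and how it combines general with domain-specific representations are described in the full text.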

Authors (8)
  1. Xinyu Tang (20 papers)
  2. Xiaolei Wang (44 papers)
  3. Zhihao Lv (1 paper)
  4. Yingqian Min (14 papers)
  5. Wayne Xin Zhao (196 papers)
  6. Binbin Hu (42 papers)
  7. Ziqi Liu (78 papers)
  8. Zhiqiang Zhang (129 papers)
