To CoT or To Loop? A Formal Comparison Between Chain-of-Thought and Looped Transformers (2505.19245v1)

Published 25 May 2025 in cs.LG and cs.AI

Abstract: Chain-of-Thought (CoT) and Looped Transformers have been shown to empirically improve performance on reasoning tasks and to theoretically enhance expressivity by recursively increasing the number of computational steps. However, their comparative capabilities are still not well understood. In this paper, we provide a formal analysis of their respective strengths and limitations. We show that Looped Transformers can efficiently simulate parallel computations for deterministic tasks, which we formalize as evaluation over directed acyclic graphs. In contrast, CoT with stochastic decoding excels at approximate inference for compositional structures, namely self-reducible problems. These separations suggest the tasks for which depth-driven recursion is more suitable, thereby offering practical cues for choosing between reasoning paradigms.

To CoT or To Loop? A Formal Comparison Between Chain-of-Thought and Looped Transformers

The paper "To CoT or To Loop? A Formal Comparison Between Chain-of-Thought and Looped Transformers" by Xu and Sato offers an analytical exploration of two prominent paradigms in transformer-based reasoning: Chain-of-Thought (CoT) and Looped Transformers. Both methodologies have demonstrated empirical improvements in performance on reasoning tasks but differ fundamentally in their computational structures. This paper presents a formal analysis of their strengths and limitations, providing clarity on the suitability of each paradigm for different categories of tasks.

Transformer models have undergone substantial evolution since their inception, broadening their applicability to complex reasoning tasks. Chain-of-Thought prompting, which generates intermediate reasoning steps, enhances a transformer's ability to solve such tasks by leveraging inference-time computation to break problems down into sequential decisions. This strategy mirrors how humans typically approach problem-solving, making it a natural augmentation to standard transformer architectures.
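To make the sequential-decision view concrete, here is a minimal sketch of CoT-style stochastic decoding, assuming a placeholder next-token model; `next_token_distribution` and `toy_model` are hypothetical names for illustration, not APIs from the paper.

```python
import random

def cot_decode(prompt_tokens, next_token_distribution, max_steps=32, stop="<answer>"):
    """Sample intermediate tokens one at a time, feeding each back into the context."""
    context = list(prompt_tokens)
    for _ in range(max_steps):
        tokens, probs = next_token_distribution(context)     # model's next-token distribution
        tok = random.choices(tokens, weights=probs, k=1)[0]   # stochastic decoding step
        context.append(tok)                                   # the chain grows by one token per step
        if tok == stop:
            break
    return context

# Toy stand-in for a language model: proposes an intermediate "step" token or the stop token.
def toy_model(context):
    return ["step", "<answer>"], [0.7, 0.3]

print(cot_decode(["Q:", "2+2?"], toy_model))
```

Each generated token conditions all later steps, which is the sequential structure analyzed below.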

In contrast, Looped Transformers introduce architectural recursion by repeatedly applying the same transformer layers, letting the model maintain context and progressively refine its embeddings across loop iterations. This recursive operation enhances expressivity, increasing the effective depth of computation without lengthening the input sequence, and supports parallel computation efficiently.
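As a rough illustration rather than the paper's construction, the following PyTorch sketch reuses a single encoder block for a fixed number of loop iterations, so effective depth grows with the loop count rather than with the number of parameters or the length of the input.

```python
import torch
import torch.nn as nn

class LoopedEncoder(nn.Module):
    """Toy looped model: one shared block applied num_loops times (illustrative only)."""
    def __init__(self, d_model=64, nhead=4, num_loops=8):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.num_loops = num_loops

    def forward(self, x):
        # Progressively refine the same embeddings across loop iterations.
        for _ in range(self.num_loops):
            x = self.block(x)
        return x

x = torch.randn(2, 10, 64)            # (batch, sequence, d_model)
print(LoopedEncoder()(x).shape)       # torch.Size([2, 10, 64]); depth comes from looping
```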

Key Numerical Results and Claims

The paper formalizes deterministic computation as evaluation over directed acyclic graphs (DAGs) and analyzes the cost each reasoning paradigm incurs on this task. CoT is inherently sequential: by explicitly generating intermediate tokens one at a time, a CoT model can simulate complex functions, but the number of decoding steps it needs is proportional to the size of the computation graph. It is therefore best suited to problems that unfold as chains of dependent steps. An illustrative node-by-node analogue is sketched below.
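As an illustrative analogue (not the paper's simulation), the toy example below evaluates a small Boolean DAG one node per step, mirroring a CoT trace that emits one intermediate result at a time; the step count tracks the number of nodes.

```python
def eval_dag_sequential(nodes, inputs):
    """nodes: list of (name, op, parent_names) in topological order."""
    values = dict(inputs)
    steps = 0
    for name, op, parents in nodes:
        values[name] = op(*(values[p] for p in parents))
        steps += 1                          # one "CoT step" per node
    return values, steps

# Example DAG: out = (a AND b) OR (NOT c)
nodes = [
    ("g1", lambda x, y: x and y, ["a", "b"]),
    ("g2", lambda x: not x,      ["c"]),
    ("out", lambda x, y: x or y, ["g1", "g2"]),
]
values, steps = eval_dag_sequential(nodes, {"a": True, "b": False, "c": True})
print(values["out"], steps)               # False 3 -- steps equal the graph size
```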

Conversely, Looped Transformers enable parallel computation over DAGs by evaluating nodes layer by layer rather than one at a time. This architectural recursion lets looped models efficiently simulate tasks that admit parallel solutions: the authors establish that, under conditions such as linear parallel space complexity, Looped Transformers can carry out the computation in a number of loop iterations proportional to the depth of the graph, a significant advantage over CoT on tasks amenable to parallel evaluation. A layer-wise analogue follows.
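For contrast, the same toy DAG can be evaluated layer by layer, so the number of passes tracks the graph's depth rather than its size; this is again an illustrative analogue with hypothetical helper names, not the paper's construction.

```python
def eval_dag_layered(layers, inputs):
    """layers: list of layers; each layer is a list of (name, op, parent_names)."""
    values = dict(inputs)
    for layer in layers:                                   # one loop iteration per DAG layer
        values.update({name: op(*(values[p] for p in parents))
                       for name, op, parents in layer})    # nodes in a layer are independent
    return values

layers = [
    [("g1", lambda x, y: x and y, ["a", "b"]),             # layer 1: gates that only read inputs
     ("g2", lambda x: not x,      ["c"])],
    [("out", lambda x, y: x or y, ["g1", "g2"])],          # layer 2: combine layer-1 results
]
print(eval_dag_layered(layers, {"a": True, "b": False, "c": True})["out"])
# False, after 2 passes (graph depth) versus 3 sequential steps above (graph size)
```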

The paper then contrasts the expressivity limitations of deterministic CoT and Looped Transformers under equal computational budgets. The authors give a rigorous comparison against Boolean circuits, showing that Looped Transformers have the advantage on computations of polylogarithmic depth, such as those captured by nonuniform circuit classes like NC^k. The stochastic nature of CoT decoding, on the other hand, is shown to better support probabilistic reasoning tasks.
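For context, here is the standard textbook definition of the polylogarithmic-depth circuit classes mentioned above; the paper's comparison concerns nonuniform variants of these classes.

```latex
% NC^k: languages decided by Boolean circuit families of polynomial size,
% O(log^k n) depth, and bounded fan-in (stated for background, not from the paper).
\[
  \mathsf{NC}^k = \left\{\, L \;\middle|\;
    L \text{ is decided by circuit families of size } n^{O(1)}
    \text{ and depth } O(\log^k n) \text{ with bounded fan-in} \right\},
  \qquad \mathsf{NC} = \bigcup_{k \ge 1} \mathsf{NC}^k .
\]
```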

Implications and Future Developments

The implications of this research are manifold. Academically, it elucidates the underlying complexity distinctions between CoT and Looped Transformers, revealing their respective suitability across different computational tasks. Practically, these insights guide the design and deployment of transformer models tailored to specific applications, whether deterministic problem-solving or probabilistic inference.

Future work will likely extend the analytical framework to additional computation paradigms, such as multitask learning, and examine how these findings bear on scaling inference-time computation. Open questions remain about how hybrid models could be structured to combine the strengths of sequential and parallel computation.

In conclusion, the paper deepens our understanding of the computational characteristics of CoT and Looped Transformers. By dissecting their respective expressivity and efficiency, the authors provide a principled basis for choosing between reasoning paradigms, a meaningful advance in the theoretical foundations underlying transformer model development and application.

Authors (2)
  1. Kevin Xu (21 papers)
  2. Issei Sato (82 papers)