
Expanding discrete chain-of-thought to complex reasoning

Determine principled methods to expand discrete chain-of-thought (CoT) reasoning with textual tokens in autoregressive large language models so that these models can reliably solve more complex reasoning problems (e.g., multi-hop reasoning and planning at larger scales) that current discrete CoT techniques fail to address.


Background

LLMs can be strengthened by chain-of-thought techniques that produce intermediate textual reasoning steps. Despite strong performance on many tasks, models using discrete CoT struggle with more sophisticated reasoning and planning problems. The paper proposes continuous CoT (Coconut) and shows theoretical and empirical advantages on directed graph reachability, motivating the question of how to improve the discrete CoT paradigm itself.

The open problem concerns finding mechanisms or augmentations to discrete CoT that enable reliable performance on complex reasoning tasks, closing the gap observed between discrete CoT and continuous CoT approaches.
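As a toy illustration of the gap (not the paper's method or code), discrete CoT emits one textual token per step and therefore traces a single path through a graph, whereas a continuous thought can, in principle, carry a superposition over all currently reachable nodes, behaving like a breadth-first frontier. A minimal sketch of this contrast on directed graph reachability, assuming a simple adjacency-list representation:

```python
# Toy contrast between single-path (discrete CoT-like) and
# frontier-based (continuous CoT-like) traces on graph reachability.
# Illustrative only; not the Coconut implementation.

def discrete_cot_path(graph, start, goal, max_steps=10):
    """Greedy single-path trace: commits to one edge per step."""
    node, trace = start, [start]
    for _ in range(max_steps):
        if node == goal:
            return trace
        successors = graph.get(node, [])
        if not successors:
            return None  # dead end: would need backtracking/resampling
        node = successors[0]  # commits to one branch, like one token
        trace.append(node)
    return None

def continuous_cot_frontier(graph, start, goal, max_steps=10):
    """Superposition-style trace: each step expands the whole frontier."""
    frontier = {start}
    steps = [frontier]
    for _ in range(max_steps):
        if goal in frontier:
            return steps
        frontier = {v for u in frontier for v in graph.get(u, [])}
        steps.append(frontier)
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["D"]}
print(discrete_cot_path(graph, "A", "D"))        # one committed path
print(continuous_cot_frontier(graph, "A", "D"))  # frontier per step
```

The open problem asks, in effect, what augmentations (e.g., search, backtracking, or parallel sampling over discrete tokens) could let the single-path regime match the frontier-style regime reliably.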

References

It remains an open problem how to expand existing discrete CoT to solve more complex reasoning problems.

Reasoning by Superposition: A Theoretical Perspective on Chain of Continuous Thought (Zhu et al., 18 May 2025, arXiv:2505.12514), Section 1 (Introduction)