Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples (2406.05673v6)

Published 9 Jun 2024 in cs.AI and cs.CL

Abstract: The ability to generate diverse solutions to a given problem is a hallmark of human creativity. This divergent reasoning is also crucial for machines, enhancing their robustness and enabling them to assist humans in many applications such as scientific discovery. However, existing approaches to multi-step reasoning with LLMs have mostly focused only on reasoning accuracy, without further discovering more diverse valid solutions. For example, supervised fine-tuning improves reasoning quality but requires vast labeled data, while reward-maximizing reinforcement learning finds top-reward solutions while neglecting the solution diversity. To fill this gap, we propose Flow of Reasoning (FoR), an efficient diversity-seeking LLM finetuning method aimed at improving reasoning quality and diversity with minimal data. FoR formulates multi-step LLM reasoning as a Markovian flow on a DAG-structured reasoning graph. This formulation allows us to incorporate and adapt principled GFlowNet approaches for finetuning LLMs to sample divergent paths with probabilities proportional to the (unnormalized) reward of target problems. Extensive experiments show that, with limited training examples (e.g., 15 examples), FoR enables the discovery of diverse, creative, high-quality solutions, greatly outperforming a wide range of existing inference and training methods across six challenging reasoning tasks, including BlocksWorld (embodied reasoning), Game24 (math puzzle solving), Rubik's Cube (spatial reasoning), 1D-ARC (abstraction reasoning), GSM8k (math reasoning), and ProntoQA (logical reasoning). Code is available at https://github.com/Yu-Fangxu/FoR.

Summary

  • The paper presents the Flow of Reasoning (FoR) method to enhance LLMs' ability to generate diverse solutions with minimal training data.
  • It conceptualizes multi-step reasoning as a Markovian flow on a DAG and leverages Generative Flow Networks to sample varied reasoning paths.
  • Extensive experiments across tasks like BlocksWorld and Rubik’s Cube demonstrate FoR’s superior performance compared to conventional approaches.

The paper "Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples" addresses the challenge of generating diverse solutions to a given problem, an integral part of human creativity. Most existing methods for multi-step reasoning with LLMs focus primarily on improving reasoning accuracy and overlook the diversity of viable solutions. This paper introduces the Flow of Reasoning (FoR) method to fill that gap.

FoR is a diversity-seeking fine-tuning approach for LLMs that improves both reasoning quality and diversity while requiring minimal data. It formulates multi-step reasoning as a Markovian flow on a reasoning graph structured as a directed acyclic graph (DAG). This formulation lets the authors adapt principled techniques from Generative Flow Networks (GFlowNets) to fine-tune LLMs so that they sample diverse reasoning paths, with each path drawn with probability proportional to the unnormalized reward of the target problem.
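To make the training objective concrete, the sketch below shows a trajectory-balance loss, the kind of GFlowNet objective this formulation admits. It is a minimal illustration under stated assumptions, not a reproduction of the FoR codebase: the function name, tensor shapes, and the zero-log-prob backward policy in the usage example are all hypothetical.

```python
import torch

def trajectory_balance_loss(log_pf_steps, log_pb_steps, log_reward, log_z):
    """Squared trajectory-balance residual for one sampled reasoning path.

    log_pf_steps: per-step log-probs of the forward (LLM) policy P_F(s_{t+1}|s_t)
    log_pb_steps: per-step log-probs of a backward policy P_B(s_t|s_{t+1})
    log_reward:   log of the unnormalized task reward R(tau)
    log_z:        learned estimate of the log partition function
    """
    log_pf = torch.stack(log_pf_steps).sum()
    log_pb = torch.stack(log_pb_steps).sum()
    # Driving this residual to zero makes P_F(tau) proportional to R(tau).
    return (log_z + log_pf - log_reward - log_pb) ** 2

# Hypothetical usage with a two-step reasoning path:
log_z = torch.tensor(0.0, requires_grad=True)            # learned scalar
log_pf_steps = [torch.tensor(-1.2), torch.tensor(-0.7)]  # from the LLM policy
log_pb_steps = [torch.tensor(0.0), torch.tensor(0.0)]    # tree-structured case
loss = trajectory_balance_loss(log_pf_steps, log_pb_steps,
                               torch.tensor(2.3), log_z)
loss.backward()
```

On a DAG a state can have several parents, so the backward policy is typically nonzero (e.g., uniform over parents). Minimizing such a loss over sampled paths trains the LLM to spread probability mass across many high-reward solutions rather than collapsing onto a single one.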

The authors conducted extensive experiments demonstrating FoR's effectiveness with a limited number of training examples, sometimes as few as 15. FoR significantly outperformed a wide range of inference and training methods across six challenging reasoning tasks:

  • BlocksWorld: Embodied reasoning.
  • Game24: Mathematical puzzle solving.
  • Rubik’s Cube: Spatial reasoning.
  • 1D-ARC: Abstraction reasoning.
  • GSM8k: Math reasoning.
  • PrOntoQA: Logical reasoning.

The approach shows promise for enhancing the creativity and robustness of LLMs, potentially aiding applications such as scientific discovery. The results suggest that, with FoR, even minimal data can suffice to discover diverse, high-quality solutions across domains. The code supporting this research is publicly available on GitHub.
