Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning (2410.22304v1)
Abstract: Mathematical reasoning is a crucial capability for LLMs, yet generating detailed and accurate reasoning traces remains a significant challenge. This paper introduces a novel approach to producing high-quality reasoning traces for LLM fine-tuning using online learning **Flows**. Our method employs an incremental output production Flow, where component LLMs collaboratively construct solutions through iterative communication. We train the Flow using online Direct Preference Optimization (DPO) learning with rollouts, generating DPO pairs for each training example and updating models in real time. We directly compare the quality of reasoning traces generated by our method with those produced through direct model inference, demonstrating that our approach improves LLM performance on mathematical reasoning tasks.
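
The abstract does not spell out how the rollout-generated DPO pairs are constructed, so the following is only a rough, non-authoritative sketch of one plausible reading: at each incremental step of the Flow, several candidate chunks are sampled, each rollout is completed and graded by final-answer correctness, and correct chunks are paired against incorrect ones. All names below (`generate_chunk`, `complete_and_check`, `collect_pairs`), the chunk granularity, and the rollout count are illustrative assumptions, not the paper's interface.

```python
# Hedged sketch (assumptions, not the paper's method): rollout-based DPO pair
# generation for an incremental output production Flow.
import random
from dataclasses import dataclass


@dataclass
class DpoPair:
    prompt: str    # problem statement plus the partial solution so far
    chosen: str    # candidate chunk whose rollout reached a correct answer
    rejected: str  # candidate chunk whose rollout reached an incorrect answer


def generate_chunk(prompt: str) -> str:
    """Stand-in for the answer LLM proposing the next reasoning chunk."""
    return random.choice(["... partial step A ...", "... partial step B ..."])


def complete_and_check(prompt: str, chunk: str, gold: str) -> bool:
    """Stand-in for rolling the Flow forward to a final answer and grading it
    against the reference answer; a real system would call the component LLMs."""
    return random.random() < 0.5


def collect_pairs(problem: str, gold: str, n_steps: int = 3,
                  n_rollouts: int = 4) -> list[DpoPair]:
    """At every incremental step, sample alternative chunks (rollouts), label
    each by whether its completed rollout reaches the correct answer, and pair
    correct against incorrect chunks to form DPO training data."""
    pairs: list[DpoPair] = []
    partial = ""
    for _ in range(n_steps):
        prompt = problem + "\n" + partial
        rollouts = [generate_chunk(prompt) for _ in range(n_rollouts)]
        labeled = [(c, complete_and_check(prompt, c, gold)) for c in rollouts]
        good = [c for c, ok in labeled if ok]
        bad = [c for c, ok in labeled if not ok]
        pairs.extend(DpoPair(prompt, g, b) for g, b in zip(good, bad))
        partial += rollouts[0] + "\n"  # continue the Flow with one sampled chunk
    return pairs


if __name__ == "__main__":
    for pair in collect_pairs("Q: 2 + 2 = ?", gold="4"):
        print(pair)
```

Under this reading, a standard DPO update would then be applied to the component LLMs on the collected pairs and the updated models reused immediately for subsequent training examples, matching the online learning described in the abstract.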