Full-Step-DPO: Self-Supervised Preference Optimization with Step-wise Rewards for Mathematical Reasoning (2502.14356v1)

Published 20 Feb 2025 in cs.CL

Abstract: Direct Preference Optimization (DPO) often struggles with long-chain mathematical reasoning. Existing approaches, such as Step-DPO, typically improve this by focusing on the first erroneous step in the reasoning chain. However, they overlook all other steps and rely heavily on humans or GPT-4 to identify erroneous steps. To address these issues, we propose Full-Step-DPO, a novel DPO framework tailored for mathematical reasoning. Instead of optimizing only the first erroneous step, it leverages step-wise rewards from the entire reasoning chain. This is achieved by training a self-supervised process reward model, which automatically scores each step, providing rewards while avoiding reliance on external signals. Furthermore, we introduce a novel step-wise DPO loss, which dynamically updates gradients based on these step-wise rewards, endowing LLMs with stronger reasoning capabilities. Extensive evaluations on both in-domain and out-of-domain mathematical reasoning benchmarks across various base LLMs demonstrate that Full-Step-DPO achieves superior performance compared to state-of-the-art baselines.
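The abstract describes a step-wise DPO loss whose gradient contributions are modulated by rewards from a process reward model (PRM). The sketch below is a hypothetical, minimal illustration of that idea, not the paper's actual objective: it assumes per-step log-probabilities and per-step PRM scores are already available, and it simply reweights each step's log-ratio before applying the standard DPO logistic loss. All names (`stepwise_dpo_loss`, the weighting scheme, `beta`) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def stepwise_dpo_loss(policy_logps_chosen, policy_logps_rejected,
                      ref_logps_chosen, ref_logps_rejected,
                      rewards_chosen, rewards_rejected, beta=0.1):
    """Hypothetical sketch of a step-wise DPO loss.

    Each *_logps tensor holds the summed log-probability of one reasoning
    step, shape (num_steps,); rewards_* are the PRM scores for those steps.
    """
    # Turn PRM scores into per-step weights: emphasize highly rated steps
    # in the chosen chain and poorly rated (likely erroneous) steps in the
    # rejected chain, so their gradients dominate the update.
    w_chosen = torch.softmax(rewards_chosen, dim=-1)
    w_rejected = torch.softmax(-rewards_rejected, dim=-1)

    # Weighted policy-vs-reference log-ratios aggregated over the full chain.
    chosen_ratio = (w_chosen * (policy_logps_chosen - ref_logps_chosen)).sum()
    rejected_ratio = (w_rejected * (policy_logps_rejected - ref_logps_rejected)).sum()

    # Standard DPO logistic objective on the reweighted log-ratios.
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits)
```

In this sketch the weighting is the only departure from vanilla DPO; how Full-Step-DPO actually normalizes or applies the step-wise rewards is specified in the paper itself.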

Authors (7)
  1. Huimin Xu (15 papers)
  2. Xin Mao (48 papers)
  3. Feng-Lin Li (16 papers)
  4. Xiaobao Wu (43 papers)
  5. Wang Chen (36 papers)
  6. Wei Zhang (1489 papers)
  7. Anh Tuan Luu (69 papers)