
Step-Controlled DPO: Leveraging Stepwise Error for Enhanced Mathematical Reasoning (2407.00782v3)

Published 30 Jun 2024 in cs.CL

Abstract: Direct Preference Optimization (DPO) has proven effective at improving the performance of LLMs on downstream tasks such as reasoning and alignment. In this work, we propose Step-Controlled DPO (SCDPO), a method for automatically providing stepwise error supervision by creating negative samples of mathematical reasoning rationales that start making errors at a specified step. By applying these samples in DPO training, SCDPO can better align the model to understand reasoning errors and output accurate reasoning steps. We apply SCDPO to both code-integrated and chain-of-thought solutions, empirically showing that it consistently improves the performance compared to naive DPO on three different SFT models, including one existing SFT model and two models we finetuned. Qualitative analysis of the credit assignment of SCDPO and DPO demonstrates the effectiveness of SCDPO at identifying errors in mathematical solutions. We then apply SCDPO to an InternLM2-20B model, resulting in a 20B model that achieves high scores of 88.5% on GSM8K and 58.1% on MATH, rivaling all other open-source LLMs, showing the great potential of our method.
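
The abstract describes SCDPO at a high level: negative rationales are constructed so that errors begin at a specified step, and the resulting preference pairs are fed into standard DPO training. Below is a minimal, hedged sketch of how such pairs could plug into a DPO objective. The loss is the standard DPO formulation; `sample_rejected_from_step` and its `resample_step` argument are hypothetical helpers standing in for the paper's error-injection procedure, and resampling the continuation (e.g. at a higher temperature) is an assumption for illustration, not a detail confirmed by the abstract.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over per-sequence (summed) log-probabilities.

    SCDPO reuses this objective; the difference lies in how the rejected
    rationale is constructed (errors starting at a specified step).
    """
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

def sample_rejected_from_step(steps, k, resample_step):
    """Hypothetical sketch: keep the first k steps of a correct rationale
    and regenerate the remainder so the first error appears at step k.

    `resample_step` stands in for model generation (for instance at a
    higher sampling temperature); it is an illustrative assumption only.
    """
    prefix = steps[:k]                                   # verified-correct prefix
    suffix = [resample_step(prefix, i) for i in range(k, len(steps))]
    return prefix + suffix

if __name__ == "__main__":
    # Toy log-probabilities for a batch of two preference pairs.
    pol_c = torch.tensor([-12.3, -10.1])
    pol_r = torch.tensor([-15.7, -13.9])
    ref_c = torch.tensor([-13.0, -10.8])
    ref_r = torch.tensor([-14.9, -13.2])
    print(dpo_loss(pol_c, pol_r, ref_c, ref_r).item())
```

Because the chosen and rejected rationales share the same prompt and a common correct prefix up to step k, the preference signal concentrates on the diverging steps, which is consistent with the credit-assignment analysis the abstract mentions.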

Authors (8)
  1. Zimu Lu (10 papers)
  2. Aojun Zhou (45 papers)
  3. Ke Wang (529 papers)
  4. Houxing Ren (16 papers)
  5. Weikang Shi (9 papers)
  6. Junting Pan (30 papers)
  7. Mingjie Zhan (23 papers)
  8. Hongsheng Li (340 papers)
Citations (8)