Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning (2407.18248v1)

Published 25 Jul 2024 in cs.CL

Abstract: Effective training of language models (LMs) for mathematical reasoning tasks demands high-quality supervised fine-tuning data. Besides obtaining annotations from human experts, a common alternative is sampling from larger and more powerful LMs. However, this knowledge distillation approach can be costly and unstable, particularly when relying on closed-source, proprietary LMs like GPT-4, whose behaviors are often unpredictable. In this work, we demonstrate that the reasoning abilities of small-scale LMs can be enhanced through self-training, a process where models learn from their own outputs. We also show that the conventional self-training can be further augmented by a preference learning algorithm called Direct Preference Optimization (DPO). By integrating DPO into self-training, we leverage preference data to guide LMs towards more accurate and diverse chain-of-thought reasoning. We evaluate our method across various mathematical reasoning tasks using different base models. Our experiments show that this approach not only improves LMs' reasoning performance but also offers a more cost-effective and scalable solution compared to relying on large proprietary LMs.

Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning

The paper "Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning" explores methods to enhance the mathematical reasoning capabilities of small-scale LLMs (LMs). The core idea revolves around improving traditional self-training frameworks using a technique called Direct Preference Optimization (DPO). This approach leverages the preference data to refine the model training process, guiding LMs during pseudo-label generation, making their outputs both more accurate and diverse.

The paper emphasizes two main aspects: reinforcing the reasoning capabilities of smaller LMs and doing so more efficiently than relying on large proprietary models. The investigation is motivated by the high computational and economic costs of using large proprietary models such as Codex, PaLM, and GPT-4 for reasoning tasks. Smaller models offer a more cost-effective alternative but need methodologies that boost their inherent capabilities without requiring significant resources.

Methodology

The authors introduce DPO-augmented self-training as an enhancement of traditional self-training. The method alternates between two primary steps, sketched in code after the list:

  1. DPO Step: The model is refined to produce higher-quality outputs using the DPO objective. This step uses a preference dataset built from multiple outputs sampled from the model itself, labeling rationales that reach the correct answer as preferred.
  2. SFT (Supervised Fine-Tuning) Step: Using the improved model from the DPO step, new pseudo-labeled data are generated. The correct and unique rationales are then added to the training set for further fine-tuning.
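
To make the two-step loop concrete, the following is a minimal sketch of how the iteration could be organized. The `dpo_loss` function follows the standard DPO objective; `model.sample`, `model.dpo_update`, `model.sft_update`, `extract_answer`, and the `train_set` attributes are hypothetical interfaces standing in for the paper's actual training code, not its released implementation.

```python
import torch.nn.functional as F


def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO objective on summed sequence log-probabilities (tensors)."""
    pi_logratio = policy_chosen_logp - policy_rejected_logp
    ref_logratio = ref_chosen_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (pi_logratio - ref_logratio)).mean()


def build_preference_pairs(question, rationales, gold_answer, extract_answer):
    """Pair correct (preferred) with incorrect (dispreferred) self-sampled rationales."""
    correct = [r for r in rationales if extract_answer(r) == gold_answer]
    wrong = [r for r in rationales if extract_answer(r) != gold_answer]
    return [(question, c, w) for c in correct for w in wrong]


def dpo_augmented_self_training(model, train_set, extract_answer,
                                num_iterations=3, num_samples=8):
    """Alternate a DPO step and an SFT step, as described above.

    `model` and `train_set` are assumed to expose hypothetical helpers
    (`sample`, `dpo_update`, `sft_update`, `labeled_examples`,
    `questions_with_answers`); they are placeholders for real training code.
    """
    sft_data = list(train_set.labeled_examples)  # seed supervised data
    for _ in range(num_iterations):
        # DPO step: build preferences over self-generated rationales and update.
        pref_data = []
        for question, gold in train_set.questions_with_answers:
            samples = model.sample(question, n=num_samples)
            pref_data += build_preference_pairs(question, samples, gold, extract_answer)
        model.dpo_update(pref_data, loss_fn=dpo_loss)

        # SFT step: regenerate pseudo-labels with the DPO-tuned model and fine-tune.
        for question, gold in train_set.questions_with_answers:
            for rationale in set(model.sample(question, n=num_samples)):  # deduplicate
                if extract_answer(rationale) == gold:
                    sft_data.append((question, rationale))
        model.sft_update(sft_data)
    return model
```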

Additionally, to boost performance on arithmetic, the researchers integrate an external calculator into the reasoning process. They propose a way to use the calculator during batched inference, overcoming the batch-size-of-one limitation of existing implementations.
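
As an illustration of how such calculator calls can be detected during decoding, the sketch below assumes GSM8K-style `<<expression=result>>` annotations in the generated rationales; the regex-based detection, the restricted `eval`, and the per-sequence application within a batch are assumptions for illustration, not the authors' exact implementation.

```python
import re

# Detects an open calculator annotation at the end of a partial generation,
# e.g. "... so she earns <<24*3=".
_OPEN_CALC = re.compile(r"<<([0-9+\-*/(). ]+)=$")


def fill_calculator(partial_text: str) -> str | None:
    """Return the value to append after '=', or None if no open calculator call."""
    match = _OPEN_CALC.search(partial_text)
    if match is None:
        return None
    try:
        # Restricted eval: the pattern only admits digits, parentheses, and
        # the four arithmetic operators.
        value = eval(match.group(1), {"__builtins__": {}}, {})
    except (SyntaxError, ZeroDivisionError):
        return None
    # Print integers without a trailing ".0" to match the annotation style.
    return str(int(value)) if float(value).is_integer() else str(value)


# During batched inference, this check would run independently for every
# sequence in the batch at each decoding step, splicing the returned value
# into that sequence's context before generation continues.
assert fill_calculator("Tom pays <<12*4=") == "48"
```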

Experiments and Results

The authors conducted experiments using Flan-T5 and Llama models. Three datasets—GSM8K, MultiArith, and ASDiv—were used for training and evaluation. Notably, the results show marked improvements for models trained with DPO-augmented self-training over traditional self-training and supervised fine-tuning. For instance, the Flan-T5-Large model's accuracy on GSM8K rose from 35.6% with ordinary self-training to 37.4% with the DPO-augmented approach.

A key observation is the additional performance boost from the external calculator integration: with it, the Flan-T5-Large model reached 40% accuracy on GSM8K, surpassing other reported results for comparably sized models. An iterative training regime showed consistent gains across iterations, underscoring the robustness of the proposed method.

Discussion

The integration of DPO into self-training frameworks illustrates an efficient paradigm for enhancing small-scale LMs without the substantial costs tied to larger models. The empirical results suggest that models fine-tuned with DPO can generate higher-quality pseudo-labeled data, leading to continuous improvement with each iteration. This iterative refinement is particularly useful in scenarios with limited access to large annotated datasets.

The research also underscores the significant impact of computational tools at inference time. Incorporating an external calculator improved performance by reducing arithmetic errors, a common shortfall of smaller models. This adaptability could have broader implications for improving the precision of LMs on tasks that need intricate, multi-step reasoning beyond arithmetic, such as code generation and complex problem-solving.

Implications and Future Directions

From a practical standpoint, the demonstrated effectiveness of DPO-augmented self-training offers a scalable and economical pathway for enhancing LMs' reasoning abilities. The method reduces the need for large-scale annotation and for proprietary large models, balancing performance with resource efficiency.

Theoretically, the success of DPO in fine-tuning models using self-generated data offers insights into preference-guided learning. Future research could explore the application of this framework across different domains and tasks. Additionally, integrating knowledge distillation into the iterative DPO-self-training process may further refine model performance, creating a more synergistic approach that leverages both self-improvement and external expert models.

In conclusion, the paper provides valuable contributions by proposing a novel and effective method for improving the chain-of-thought reasoning in small-scale LMs. This work is meaningful both for its immediate practical benefits and for setting a foundational approach that can be built upon in future AI developments.

Authors (3)
  1. Tianduo Wang (5 papers)
  2. Shichen Li (7 papers)
  3. Wei Lu (325 papers)
Citations (7)