
Insights into Alignment: Evaluating DPO and its Variants Across Multiple Tasks (2404.14723v1)

Published 23 Apr 2024 in cs.CL

Abstract: LLMs have demonstrated remarkable performance across a spectrum of tasks. Recently, Direct Preference Optimization (DPO) has emerged as an RL-free approach to optimize the policy model on human preferences. However, several limitations hinder the widespread adoption of this method. To address these shortcomings, various versions of DPO have been introduced. Yet, a comprehensive evaluation of these variants across diverse tasks is still lacking. In this study, we aim to bridge this gap by investigating the performance of alignment methods across three distinct scenarios: (1) keeping the Supervised Fine-Tuning (SFT) part, (2) skipping the SFT part, and (3) skipping the SFT part and utilizing an instruction-tuned model. Furthermore, we explore the impact of different training sizes on their performance. Our evaluation spans a range of tasks including dialogue systems, reasoning, mathematical problem-solving, question answering, truthfulness, and multi-task understanding, encompassing 13 benchmarks such as MT-Bench, Big Bench, and Open LLM Leaderboard. Key observations reveal that alignment methods achieve optimal performance with smaller training data subsets, exhibit limited effectiveness in reasoning tasks yet significantly impact mathematical problem-solving, and employing an instruction-tuned model notably influences truthfulness. We anticipate that our findings will catalyze further research aimed at developing more robust models to address alignment challenges.

Exploration of Direct Preference Optimization and Its Variants in Optimizing Human Preferences in LLMs

Introduction

In evaluating the effectiveness of various alignment methods on LLMs, this paper scrutinizes Direct Preference Optimization (DPO) alongside related variants such as IPO, KTO, and CPO. The comparison spans several tasks, testing the utility of different alignment strategies beyond standard Supervised Fine-Tuning (SFT) in contexts such as dialogue systems, reasoning, mathematical problem-solving, truthfulness, and multi-task performance.

Analysis of Alignment Methods

Several RL-free alignment methods, including DPO, IPO, KTO, and CPO, are evaluated for their capacity to optimize models on human preferences without the complexity of reinforcement learning algorithms. Each method trains the policy model on preference data with a different objective:

  • DPO: Optimizes a logistic (sigmoid) loss on the difference in log-probability ratios, relative to a frozen reference model, between chosen and rejected responses (see the sketch after this list).
  • IPO: Replaces DPO's logistic loss with a squared-error objective that regresses the preference margin toward a fixed target, aiming to curb the overfitting DPO can exhibit.
  • KTO: Inspired by prospect theory; it does not require paired preferences and aligns the model directly from per-example desirability (utility) signals.
  • CPO: Streamlines DPO by dropping the reference model during training, reducing memory overhead and improving computational efficiency.
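
As a concrete illustration, the sketch below writes out the pairwise objectives of DPO, IPO, and CPO over per-sequence log-probabilities. It is a minimal sketch under stated assumptions, not the authors' implementation: the tensor names and the default beta/tau values are illustrative, KTO is omitted because it is not a pairwise loss, and CPO's auxiliary SFT term on the chosen response is only noted in a comment.

    import torch
    import torch.nn.functional as F

    def preference_losses(policy_chosen_logps, policy_rejected_logps,
                          ref_chosen_logps, ref_rejected_logps,
                          beta=0.1, tau=0.1):
        """Each argument is a 1-D tensor of summed log-probs, one entry per preference pair."""
        # Log-ratios of policy to reference model for chosen and rejected responses.
        chosen_ratio = policy_chosen_logps - ref_chosen_logps
        rejected_ratio = policy_rejected_logps - ref_rejected_logps
        margin = chosen_ratio - rejected_ratio

        # DPO: logistic (sigmoid) loss on the scaled margin.
        dpo = -F.logsigmoid(beta * margin)

        # IPO: squared-error regression of the margin toward 1 / (2 * tau),
        # intended to curb the overfitting DPO can exhibit.
        ipo = (margin - 1.0 / (2.0 * tau)) ** 2

        # CPO: drops the reference model and compares raw policy log-probs;
        # the full CPO objective also adds a negative log-likelihood (SFT)
        # term on the chosen response, omitted here.
        cpo = -F.logsigmoid(beta * (policy_chosen_logps - policy_rejected_logps))

        return dpo.mean(), ipo.mean(), cpo.mean()

Minimizing any of these losses pushes the policy toward assigning relatively higher probability to the chosen response than to the rejected one; the methods differ mainly in how the margin is penalized and in whether a frozen reference model is required.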

Experiments and Outcomes

The experiments examine three scenarios (a configuration sketch follows the list):

  1. Fine-tuning SFT models: KTO generally outperforms the other methods in this setting, particularly on mathematical tasks.
  2. Tuning pre-trained models directly: Contrary to what might be expected, KTO and CPO perform capably even without an SFT phase, matching SFT-initialized models on dialogue tasks as measured by MT-Bench.
  3. Using instruction-tuned models: Perhaps the most striking finding appears here, where the alignment methods significantly affect truthfulness metrics.
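
For orientation, the sketch below encodes the three scenarios as a configuration grid and loops over the four alignment methods. The checkpoint names, the SCENARIOS dictionary, and the run_alignment stub are hypothetical placeholders under stated assumptions, not the paper's code.

    # Hypothetical scenario grid; checkpoint names are illustrative only.
    SCENARIOS = {
        # (1) Keep the SFT part: run SFT on the pre-trained model, then align.
        "sft_then_align": {"base_model": "pretrained-7b", "run_sft_first": True},
        # (2) Skip SFT: apply the alignment loss directly to the pre-trained model.
        "align_pretrained": {"base_model": "pretrained-7b", "run_sft_first": False},
        # (3) Skip SFT but start from an instruction-tuned checkpoint.
        "align_instruction_tuned": {"base_model": "instruct-7b", "run_sft_first": False},
    }

    def run_alignment(method: str, base_model: str, run_sft_first: bool) -> None:
        # Stand-in for the actual preference-optimization training loop.
        print(f"{method.upper():>4} | start from {base_model:<13} | SFT first: {run_sft_first}")

    for scenario, config in SCENARIOS.items():
        for method in ("dpo", "ipo", "kto", "cpo"):
            run_alignment(method=method, **config)

Each scenario is then evaluated on the same benchmark suite, so the comparison isolates the effect of the starting checkpoint and of keeping or skipping the SFT phase.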

Results on established benchmarks (such as MT-Bench, GSM8K, and TruthfulQA) show that the alignment methods have a pronounced effect, though its size depends on factors like task type and training-data size. Across evaluations, sensitivity to data volume is clear, with smaller training subsets often yielding better outcomes.

Discussion on Practical and Theoretical Implications

This systematic investigation into alignment methods sheds light on their scalability, efficiency, and effectiveness, fostering a deeper understanding of how they behave and where they fall short in real-world applications. The observation that starting from an instruction-tuned model notably enhances truthfulness presents a valuable pathway for further explorations into making LLMs more honest and reliable interlocutors. Additionally, the findings contribute to ongoing discussions about the necessity and efficiency of the SFT phase in the alignment process, with methods such as KTO and IPO offering tangible alternatives for refinement.

Future Directions

The outcomes underscore the need for continued research on alignment methods, especially across broader and more complex datasets and tasks. Future work could extend these initial findings into domains that critically need robust alignment, such as automated content generation and interactive systems requiring nuanced, human-like understanding. The comparisons between SFT-based and directly tuned models also invite a richer analysis of training methodologies and their impact on the generalizability and adaptability of LLMs across varied applications.

In sum, this paper not only clarifies the operational terrain of newer RL-free alignment methods but also delineates where each applies and where it falls short, offering a roadmap for future research on aligning LLMs with human preferences.

Authors (3)
  1. Amir Saeidi
  2. Shivanshu Verma
  3. Chitta Baral
Citations (15)