UFT: Unifying Fine-Tuning of SFT and RLHF/DPO/UNA through a Generalized Implicit Reward Function (2410.21438v1)

Published 28 Oct 2024 in cs.CL and cs.LG

Abstract: By pretraining on trillions of tokens, an LLM gains the capability of text generation. However, to enhance its utility and reduce potential harm, SFT and alignment are applied sequentially to the pretrained model. Due to the differing nature and objective functions of SFT and alignment, catastrophic forgetting has become a significant issue. To address this, we introduce Unified Fine-Tuning (UFT), which integrates SFT and alignment into a single training stage using the same objective and loss functions through an implicit reward function. Our experimental results demonstrate that UFT outperforms SFT on instruction-tuning data alone. Moreover, when combining instruction-tuning data with alignment data, UFT effectively prevents catastrophic forgetting across these two stages and shows a clear advantage over sequentially applying SFT and alignment. This is evident in the significant improvements observed in the ifeval task for instruction-following and the truthful-qa task for factuality. The proposed general fine-tuning framework UFT establishes an effective and efficient pretraining-UFT paradigm for LLM training.

Overview of UFT: Unifying Fine-Tuning of SFT and RLHF/DPO/UNA through a Generalized Implicit Reward Function

The paper introduces Unified Fine-Tuning (UFT), a novel approach for fine-tuning LLMs. The method aims to mitigate the catastrophic forgetting that occurs when supervised fine-tuning (SFT) and alignment techniques such as Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), and Unified Alignment (UNA) are applied sequentially. The authors propose a unified methodology that integrates these processes into a single stage via a generalized implicit reward function and demonstrate improved performance across a variety of tasks.

Methodology

UFT combines SFT and alignment into a single training framework with a shared objective and loss function expressed through an implicit reward function. This contrasts with traditional approaches, in which SFT and alignment are distinct stages and catastrophic forgetting can arise between them. The authors leverage UNA's ability to process various types of feedback, including pairwise, binary, and score-based feedback, and extend these capabilities to encompass the goals of SFT, ensuring compatibility and synergy between instruction-tuning data and alignment data.
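For reference, the implicit reward introduced by DPO and generalized by UNA can be written, up to a prompt-dependent normalization term, as the following minimal LaTeX sketch, where \pi_{\mathrm{ref}} denotes the frozen reference (pretrained) model and \beta is a temperature-like hyperparameter:

r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}

Because demonstrations, preferences, and scores can all be expressed as targets for r_\theta, a single loss over this quantity can cover both SFT and alignment.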

The paper presents a mathematical formulation in which both SFT and UNA aim to maximize the likelihood of the responses in the instruction-tuning data, which allows the two objectives to be optimized jointly rather than sequentially. The experiments show that UFT not only outperforms SFT on instruction-tuning datasets alone but also effectively prevents catastrophic forgetting when instruction and alignment data are combined.
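The precise unified loss is defined in the paper; purely as an illustration, the sketch below shows one way such an objective could be implemented, assuming UNA-style score-based feedback in which instruction-tuning demonstrations are simply assigned the maximum score. The function names, the sigmoid/BCE form, and the value of beta are hypothetical choices for this sketch, not the authors' implementation.

import torch
import torch.nn.functional as F

def implicit_reward(policy_logp, ref_logp, beta=0.1):
    # DPO/UNA-style implicit reward: beta * log(pi_theta(y|x) / pi_ref(y|x)),
    # computed from sequence-level log-probabilities.
    return beta * (policy_logp - ref_logp)

def unified_loss(policy_logp, ref_logp, target_score, beta=0.1):
    # Hypothetical unified objective (illustration only): squash the implicit
    # reward into (0, 1) with a sigmoid and regress it onto a feedback score.
    # Instruction-tuning demonstrations are folded in by assigning them the
    # maximum score of 1.0, so SFT-style data and score-based alignment data
    # share one loss.
    reward = implicit_reward(policy_logp, ref_logp, beta)
    return F.binary_cross_entropy(torch.sigmoid(reward), target_score)

# Toy usage with made-up sequence-level log-probabilities.
policy_logp = torch.tensor([-12.3, -20.1])  # log pi_theta(y|x) for two responses
ref_logp = torch.tensor([-14.0, -19.5])     # log pi_ref(y|x) for the same responses
scores = torch.tensor([1.0, 0.2])           # 1.0 = demonstration, 0.2 = low-rated response
print(unified_loss(policy_logp, ref_logp, scores).item())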

Experimental Results

The experimental evaluation demonstrates that UFT consistently surpasses traditional SFT across several tasks, particularly in instruction-following (ifeval) and factuality (truthful-qa) assessments. This is attributed to UFT's dual focus on maximizing reward scores while minimizing divergence from the pretrained model. The results illustrate UFT's effectiveness in maintaining the alignment and instructional capabilities of LLMs, contrasting with the performance declines observed in sequential training paradigms.
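The trade-off described above, maximizing reward while staying close to the pretrained model, is the standard KL-regularized alignment objective shared by RLHF, DPO, and UNA (standard formulation, not specific to this paper):

\max_{\pi_\theta}\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\big[\, r(x, y) \,\big] \;-\; \beta\, D_{\mathrm{KL}}\!\big( \pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big)

The KL term is what keeps the fine-tuned model anchored to its pretrained capabilities, which is the mechanism the authors credit for avoiding catastrophic forgetting.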

When analyzing the impact of the mix of instruction-tuning and alignment data, UFT maintains strong performance on both types of data, underscoring the importance of a balanced data mixture for enhancing LLM capabilities. Furthermore, the integration with UNA ensures that UFT can handle the various feedback types (pairwise, binary, and score-based), establishing a robust framework for future applications.

Implications and Future Developments

The implications of this research are significant, particularly in the field of natural language processing, due to the potential improvements in both the generation capabilities and ethical alignment of LLMs. By harmonizing SFT and alignment processes, UFT promises to enhance the efficiency and effectiveness of LLM fine-tuning, potentially influencing future methodologies in AI alignment and LLM training.

Future developments may involve exploring the integration of additional feedback mechanisms and optimizing the balance between instruction-tuning and alignment data. Further studies could also investigate UFT's adaptability to different LLM architectures and its applicability across various ethical and instructional contexts in AI.

In conclusion, the proposed UFT methodology signifies a considerable advancement in the fine-tuning landscape for LLMs, addressing key challenges through an innovative, unified approach. The paper contributes a foundational framework that promises to refine and enhance LLM training paradigms, fostering advancements in AI alignment and LLM utility.

Authors (6)
  1. Zhichao Wang (83 papers)
  2. Bin Bi (24 papers)
  3. Zixu Zhu (1 paper)
  4. Xiangbo Mao (1 paper)
  5. Jun Wang (990 papers)
  6. Shiyu Wang (77 papers)