Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization (2310.03708v4)

Published 5 Oct 2023 in cs.LG and cs.AI

Abstract: A single LLM, even when aligned with labelers through reinforcement learning from human feedback (RLHF), may not suit all human preferences. Recent approaches therefore prefer customization, gathering multi-dimensional feedback, and creating distinct reward models for each dimension. Different LLMs are then optimized for various preferences using multi-objective RLHF (MORLHF) with varying reward weights. However, RL fine-tuning is unstable and resource-heavy, especially with diverse and usually conflicting objectives. In this paper, we present Multi-Objective Direct Preference Optimization (MODPO), an RL-free extension of Direct Preference Optimization (DPO) for multiple alignment objectives. Essentially, MODPO folds language modeling directly into reward modeling, training LLMs as implicit collective reward models that combine all objectives with specific weights. MODPO theoretically yields the same optimal solutions as MORLHF but is practically more stable and efficient. Empirical results in safety alignment and long-form question answering show that MODPO matches or outperforms existing methods, producing a Pareto front of LLMs catering to diverse preferences with three times less computational resources compared to MORLHF. Code is available at https://github.com/ZHZisZZ/modpo.
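To make the "implicit collective reward model" idea concrete, below is a minimal PyTorch-style sketch of what a MODPO-like loss could look like in the two-objective case: a DPO-style log-ratio term for the objective being optimized, plus a margin from a frozen reward model for the other objective. This is an illustrative reconstruction under stated assumptions, not the official implementation; the function name `modpo_loss`, the argument names, and the default values of `w_k` and `beta` are hypothetical (see the linked repository for the authors' code).

```python
import torch
import torch.nn.functional as F

def modpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_w | x), summed over tokens
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_l | x)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_w | x) from the frozen reference model
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_l | x)
    other_chosen_rewards: torch.Tensor,   # r_other(x, y_w) from a frozen reward model
    other_rejected_rewards: torch.Tensor, # r_other(x, y_l)
    w_k: float = 0.5,                     # weight of the objective optimized via preferences
    beta: float = 0.1,                    # KL-regularization strength, as in DPO
) -> torch.Tensor:
    """Illustrative MODPO-style loss for two alignment objectives.

    The policy itself serves as an implicit reward model for objective k,
    while the remaining objective enters as a margin from a pre-trained
    reward model, weighted by (1 - w_k).
    """
    # Implicit rewards: log-ratios of the policy against the frozen reference model.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps

    # Margin contributed by the objective that is not optimized directly.
    margin = (1.0 - w_k) * (other_chosen_rewards - other_rejected_rewards)

    # Collective preference logit, rescaled by the weight of objective k.
    logits = (beta * (chosen_logratio - rejected_logratio) - margin) / w_k

    # Bradley-Terry style negative log-likelihood of preferring y_w over y_l.
    return -F.logsigmoid(logits).mean()
```

Sweeping the weight (here `w_k`) across (0, 1) and running one RL-free training per weight is how a Pareto front of LLMs for diverse preferences would be produced, matching the abstract's description.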

Authors (7)
  1. Zhanhui Zhou (13 papers)
  2. Jie Liu (492 papers)
  3. Chao Yang (333 papers)
  4. Jing Shao (109 papers)
  5. Xiangyu Yue (93 papers)
  6. Wanli Ouyang (358 papers)
  7. Yu Qiao (563 papers)
Citations (20)