
OPTune: Efficient Online Preference Tuning (2406.07657v1)

Published 11 Jun 2024 in cs.LG and cs.CL

Abstract: Reinforcement learning with human feedback (RLHF) is critical for aligning LLMs with human preference. Compared to the widely studied offline version of RLHF, e.g., direct preference optimization (DPO), recent works have shown that the online variants achieve even better alignment. However, online alignment requires on-the-fly generation of new training data, which is costly, hard to parallelize, and suffers from varying quality and utility. In this paper, we propose a more efficient data exploration strategy for online preference tuning (OPTune), which does not rely on human-curated or pre-collected teacher responses but dynamically samples informative responses for on-policy preference alignment. During data generation, OPTune only selects prompts whose (re)generated responses can potentially provide more informative and higher-quality training signals than the existing responses. In the training objective, OPTune reweights each generated response (pair) by its utility in improving the alignment so that learning can be focused on the most helpful samples. Throughout our evaluations, OPTune'd LLMs maintain the instruction-following benefits provided by standard preference tuning whilst enjoying 1.27-1.56x faster training speed due to the efficient data exploration strategy.
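
The abstract describes two mechanisms: selecting only the prompts whose cached responses look least useful for regeneration, and reweighting each preference pair by a utility score in the training objective. The sketch below is an illustrative reading of those two ideas, not the authors' implementation; the prompt-selection heuristic (lowest cached reward), the utility weight, and all names such as `cached_rewards` and `weighted_dpo_loss` are assumptions made for this example.

```python
# Illustrative sketch only (not the paper's code). Assumes a reward model has
# already scored the cached responses and that training uses a DPO-style
# pairwise objective whose per-pair loss is scaled by a utility weight.
import torch
import torch.nn.functional as F

def select_prompts_for_regeneration(prompts, cached_rewards, budget):
    """Pick the prompts whose cached responses score lowest, i.e. where fresh
    on-policy generations are most likely to add useful training signal."""
    ranked = sorted(prompts, key=lambda p: cached_rewards[p])
    return ranked[:budget]

def weighted_dpo_loss(logp_chosen, logp_rejected,
                      ref_logp_chosen, ref_logp_rejected,
                      utility, beta=0.1):
    """DPO-style loss where each (chosen, rejected) pair is reweighted by a
    utility score, e.g. the reward gap between the two responses."""
    logits = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    per_pair = -F.logsigmoid(logits)       # standard pairwise DPO loss
    return (utility * per_pair).mean()     # focus learning on helpful pairs
```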

Authors (9)
  1. Lichang Chen (30 papers)
  2. Jiuhai Chen (26 papers)
  3. Chenxi Liu (84 papers)
  4. John Kirchenbauer (21 papers)
  5. Davit Soselia (6 papers)
  6. Chen Zhu (103 papers)
  7. Tom Goldstein (226 papers)
  8. Tianyi Zhou (172 papers)
  9. Heng Huang (189 papers)
Citations (2)