Learning Reward and Policy Jointly from Demonstration and Preference Improves Alignment (2406.06874v3)

Published 11 Jun 2024 in cs.AI, cs.HC, and cs.RO

Abstract: Aligning with human preferences and values is an important requirement for building contemporary foundation models and embodied AI. However, popular approaches such as reinforcement learning with human feedback (RLHF) break the task into successive stages, such as supervised fine-tuning (SFT), reward modeling (RM), and reinforcement learning (RL), each performing one specific learning task. Such a sequential approach results in serious issues such as significant under-utilization of data and distribution mismatch between the learned reward model and the generated policy, which eventually lead to poor alignment performance. We develop a single-stage approach named Alignment with Integrated Human Feedback (AIHF), capable of integrating both human preference and demonstration data to train reward models and the policy. The proposed approach admits a suite of efficient algorithms, which can easily reduce to, and leverage, popular alignment algorithms such as RLHF and Direct Preference Optimization (DPO), and only requires minor changes to existing alignment pipelines. We demonstrate the efficiency of the proposed solutions with extensive experiments involving alignment problems in LLMs and robotic control problems in MuJoCo. We observe that the proposed solutions outperform existing alignment algorithms such as RLHF and DPO by large margins, especially when the amount of high-quality preference data is relatively limited.
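
To make the single-stage idea concrete, the sketch below combines a demonstration (SFT-style) likelihood term with a DPO-style preference term in one loss, so that the implicit reward and the policy are trained jointly from both data sources. This is an illustrative sketch only, not the paper's exact AIHF objective; the weighting `alpha`, the DPO temperature `beta`, and the specific additive combination are assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def combined_alignment_loss(
    policy_logps_demo,      # log-probs of demonstration responses under the policy
    policy_logps_chosen,    # log-probs of preferred responses under the policy
    policy_logps_rejected,  # log-probs of dispreferred responses under the policy
    ref_logps_chosen,       # same quantities under a frozen reference model
    ref_logps_rejected,
    beta: float = 0.1,      # DPO temperature (hypothetical default)
    alpha: float = 1.0,     # weight on the demonstration term (hypothetical)
):
    """Illustrative single-stage objective: demonstration likelihood + preference term."""
    # Demonstration term: maximize likelihood of expert demonstrations (SFT-style).
    demo_loss = -policy_logps_demo.mean()

    # Preference term: margin between implicit rewards of chosen and rejected
    # responses, as in standard DPO.
    pi_margin = policy_logps_chosen - policy_logps_rejected
    ref_margin = ref_logps_chosen - ref_logps_rejected
    pref_loss = -F.logsigmoid(beta * (pi_margin - ref_margin)).mean()

    # One gradient step on this sum updates the policy using both data sources,
    # rather than running SFT, RM, and RL as separate stages.
    return alpha * demo_loss + pref_loss

# Usage example with dummy log-probabilities (batch of 4):
loss = combined_alignment_loss(
    policy_logps_demo=torch.randn(4),
    policy_logps_chosen=torch.randn(4),
    policy_logps_rejected=torch.randn(4),
    ref_logps_chosen=torch.randn(4),
    ref_logps_rejected=torch.randn(4),
)
print(loss.item())
```

In this reading, the demonstration term plays the role RLHF assigns to the separate SFT stage, while the preference term subsumes reward modeling and policy optimization; tuning `alpha` trades off the two data sources within a single training loop.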

Authors (7)
  1. Chenliang Li (92 papers)
  2. Siliang Zeng (14 papers)
  3. Zeyi Liao (14 papers)
  4. Jiaxiang Li (22 papers)
  5. Dongyeop Kang (72 papers)
  6. Mingyi Hong (172 papers)
  7. Alfredo Garcia (46 papers)
Citations (1)