Direct Preference-based Policy Optimization without Reward Modeling (2301.12842v3)

Published 30 Jan 2023 in cs.LG and cs.AI

Abstract: Preference-based reinforcement learning (PbRL) is an approach that enables RL agents to learn from preferences, which is particularly useful when formulating a reward function is challenging. Existing PbRL methods generally involve a two-step procedure: they first learn a reward model based on given preference data and then employ off-the-shelf reinforcement learning algorithms using the learned reward model. However, obtaining an accurate reward model solely from preference information, especially when the preferences come from human teachers, can be difficult. Instead, we propose a PbRL algorithm that learns directly from preferences without requiring any reward modeling. To achieve this, we adopt a contrastive learning framework to design a novel policy scoring metric that assigns a high score to policies that align with the given preferences. We apply our algorithm to offline RL tasks with actual human preference labels and show that our algorithm outperforms or is on par with existing PbRL methods. Notably, on high-dimensional control tasks, our algorithm surpasses offline RL methods that learn with ground-truth reward information. Finally, we show that our algorithm can be successfully applied to fine-tune LLMs.
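
The abstract stays high-level, so the sketch below only illustrates the general idea of scoring trajectory segments with a policy-derived quantity and training it contrastively against preference labels, rather than reproducing the paper's actual policy scoring metric. The Gaussian policy head, the log-probability score, and the names `PolicySegmentScorer` and `contrastive_preference_loss` are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicySegmentScorer(nn.Module):
    """Illustrative policy scorer: the summed log-probability the current
    (Gaussian) policy assigns to the actions in a trajectory segment.
    This is a stand-in for the paper's policy scoring metric, not its
    actual definition."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # obs, act: (batch, seg_len, dim) -> one score per segment: (batch,)
        mean = self.net(obs)
        dist = torch.distributions.Normal(mean, self.log_std.exp())
        return dist.log_prob(act).sum(dim=(-1, -2))


def contrastive_preference_loss(scorer, obs_w, act_w, obs_l, act_l):
    """Bradley-Terry-style contrastive objective: push the score of the
    preferred segment above the non-preferred one, so preference labels
    shape the policy directly, with no intermediate reward model."""
    s_w = scorer(obs_w, act_w)   # preferred ("winning") segments
    s_l = scorer(obs_l, act_l)   # non-preferred ("losing") segments
    return -F.logsigmoid(s_w - s_l).mean()
```

Because the score is computed from the policy itself, minimizing this loss updates the policy directly from the preference data, which is the point of bypassing reward modeling in the two-step PbRL pipeline described above.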

Authors (6)
  1. Gaon An (5 papers)
  2. Junhyeok Lee (21 papers)
  3. Xingdong Zuo (2 papers)
  4. Norio Kosaka (3 papers)
  5. Kyung-Min Kim (25 papers)
  6. Hyun Oh Song (32 papers)
Citations (18)