
Contrastive Policy Gradient: Aligning LLMs on sequence-level scores in a supervised-friendly fashion (2406.19185v1)

Published 27 Jun 2024 in cs.LG

Abstract: Reinforcement Learning (RL) has been used to finetune LLMs using a reward model trained from preference data, to better align with human judgment. The recently introduced direct alignment methods, which are often simpler, more stable, and computationally lighter, can more directly achieve this. However, these approaches cannot optimize arbitrary rewards, and the preference-based ones are not the only rewards of interest for LLMs (e.g., unit tests for code generation or textual entailment for summarization, among others). RL-finetuning is usually done with a variation of policy gradient, which calls for on-policy or near-on-policy samples, requiring costly generations. We introduce Contrastive Policy Gradient, or CoPG, a simple and mathematically principled new RL algorithm that can estimate the optimal policy even from off-policy data. It can be seen as an off-policy policy gradient approach that does not rely on importance sampling techniques and highlights the importance of using (the right) state baseline. We show this approach to generalize the direct alignment method IPO (identity preference optimization) and classic policy gradient. We experiment with the proposed CoPG on a toy bandit problem to illustrate its properties, as well as for finetuning LLMs on a summarization task, using a learned reward function considered as ground truth for the purpose of the experiments.
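To make the "contrastive" idea concrete, the sketch below shows a minimal pairwise policy-gradient-style loss in the spirit the abstract describes: two completions for the same prompt are scored with a sequence-level reward, and each one's reward is baselined against the other's, so no importance-sampling correction on off-policy log-probabilities is needed. The function name, signature, and exact form are illustrative assumptions, not the paper's actual CoPG objective (which, among other details, this sketch does not reproduce, e.g. any reference-policy regularization).

```python
import torch

def pairwise_contrastive_pg_loss(
    logp_a: torch.Tensor,      # log pi_theta(y_a | x), summed over tokens, shape (batch,)
    logp_b: torch.Tensor,      # log pi_theta(y_b | x), summed over tokens, shape (batch,)
    reward_a: torch.Tensor,    # sequence-level score of completion y_a, shape (batch,)
    reward_b: torch.Tensor,    # sequence-level score of completion y_b, shape (batch,)
) -> torch.Tensor:
    """Illustrative (hypothetical) contrastive policy-gradient-style loss.

    Each completion's log-probability is reinforced in proportion to how much
    its reward exceeds the reward of the other completion in the pair; the
    paired completion plays the role of a baseline, so the data may come from
    any behavior policy without importance weights.
    """
    advantage = (reward_a - reward_b).detach()          # pairwise baseline via contrast
    loss = -(advantage * (logp_a - logp_b)).mean()      # push up the better completion
    return loss

# Hypothetical usage: logp_* come from the policy being trained, reward_* from
# a learned reward model scoring previously generated (off-policy) completions.
loss = pairwise_contrastive_pg_loss(
    logp_a=torch.randn(8, requires_grad=True),
    logp_b=torch.randn(8, requires_grad=True),
    reward_a=torch.rand(8),
    reward_b=torch.rand(8),
)
loss.backward()
```

The design choice this sketch highlights is the one emphasized in the abstract: because the two completions are contrasted against each other, the advantage term acts as a state baseline, which is what lets the update be supervised-friendly and usable on pre-generated data.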

Authors (10)
  1. Yannis Flet-Berliac (16 papers)
  2. Nathan Grinsztajn (17 papers)
  3. Florian Strub (39 papers)
  4. Eugene Choi (9 papers)
  5. Chris Cremer (5 papers)
  6. Arash Ahmadian (18 papers)
  7. Yash Chandak (32 papers)
  8. Mohammad Gheshlaghi Azar (31 papers)
  9. Olivier Pietquin (90 papers)
  10. Matthieu Geist (93 papers)
Citations (1)
