Fine-tuning Language Models with Generative Adversarial Reward Modelling (2305.06176v3)

Published 9 May 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Reinforcement Learning with Human Feedback (RLHF) has been demonstrated to significantly enhance the performance of LLMs by aligning their outputs with desired human values through instruction tuning. However, RLHF is constrained by the expertise and productivity limitations of human evaluators. One response to this limitation is to fall back on supervised fine-tuning (SFT) with additional, carefully selected expert demonstrations; while this method has proven effective, it invariably increases human-in-the-loop overhead. In this study, we propose an alternative to RLHF and SFT: Reinforcement Learning with Generative Adversarial Feedback (RLGAF), which uses a generative adversarial training style to let the LLM learn from useful human expert demonstrations without being directly exposed to the training examples, thus achieving good generalization while preserving sample efficiency. Our preliminary findings indicate that RLGAF can help align LLM outputs with performance competitive with RLHF and SFT, while not suffering from their respective inherent restrictions, suggesting promising avenues for further research on automating AI alignment.
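
To make the adversarial training style described in the abstract concrete, here is a minimal sketch of how such a loop could be wired up. It is an illustrative assumption, not the paper's implementation: a tiny GRU policy stands in for the LLM, a discriminator scores sequences as expert-like versus policy-generated, random integer tensors stand in for expert demonstrations, and a plain REINFORCE update plays the role of the RL step. All module names, sizes, and hyperparameters are placeholders.

```python
# Hedged sketch of a GAN-style reward-modelling loop in the spirit of RLGAF.
# Everything below (toy GRU modules, random "expert" data, REINFORCE update)
# is an illustrative assumption, not the paper's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, SEQ = 1000, 64, 128, 16  # toy sizes (assumptions)

class PolicyLM(nn.Module):
    """Tiny autoregressive policy standing in for the fine-tuned LLM."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.head = nn.Linear(HID, VOCAB)

    def sample(self, batch):
        tokens = torch.zeros(batch, 1, dtype=torch.long)
        log_probs, h = [], None
        for _ in range(SEQ):
            out, h = self.rnn(self.emb(tokens[:, -1:]), h)
            dist = torch.distributions.Categorical(logits=self.head(out[:, -1]))
            nxt = dist.sample()
            log_probs.append(dist.log_prob(nxt))
            tokens = torch.cat([tokens, nxt.unsqueeze(1)], dim=1)
        return tokens[:, 1:], torch.stack(log_probs, dim=1).sum(dim=1)

class Discriminator(nn.Module):
    """Scores a sequence as expert-like (high) vs. policy-generated (low)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.head = nn.Linear(HID, 1)

    def forward(self, tokens):
        _, h = self.rnn(self.emb(tokens))
        return self.head(h[-1]).squeeze(-1)

policy, disc = PolicyLM(), Discriminator()
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

for step in range(100):
    expert = torch.randint(0, VOCAB, (8, SEQ))  # placeholder for expert demonstrations
    fake, log_prob = policy.sample(8)

    # 1) Discriminator update: tell expert demonstrations apart from policy samples.
    d_loss = (F.binary_cross_entropy_with_logits(disc(expert), torch.ones(8))
              + F.binary_cross_entropy_with_logits(disc(fake.detach()), torch.zeros(8)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Policy update: REINFORCE with the discriminator's score as the reward,
    #    so the policy never observes the expert demonstrations directly.
    reward = torch.sigmoid(disc(fake)).detach()
    p_loss = -(reward * log_prob).mean()
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()
```

The property this sketch mirrors is the one the abstract emphasizes: the policy only ever receives the discriminator's scalar score as a reward signal, never the expert demonstrations themselves.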

Authors (4)
  1. Zhang Ze Yu (2 papers)
  2. Lau Jia Jaw (1 paper)
  3. Zhang Hui (3 papers)
  4. Bryan Kian Hsiang Low (77 papers)
Citations (2)