Training Language Models with Language Feedback (2204.14146v4)

Published 29 Apr 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Pretrained LLMs often do not perform tasks in ways that are in line with our preferences, e.g., generating offensive text or factually incorrect summaries. Recent work approaches this issue by learning from a simple form of human evaluation: comparisons between pairs of model-generated task outputs. Comparison feedback conveys limited information about human preferences per human evaluation. Here, we propose to learn from natural language feedback, which conveys more information per human evaluation. We learn from language feedback on model outputs using a three-step learning algorithm. First, we condition the LLM on the initial output and feedback to generate many refinements. Second, we choose the refinement with the highest similarity to the feedback. Third, we finetune an LLM to maximize the likelihood of the chosen refinement given the input. In synthetic experiments, we first evaluate whether LLMs accurately incorporate feedback to produce refinements, finding that only large models (175B parameters) do so. Using only 100 samples of human-written feedback, our learning algorithm finetunes a GPT-3 model to roughly human-level summarization ability.
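The first two steps of the algorithm (sampling refinements conditioned on feedback, then ranking them by similarity to the feedback) can be illustrated with a minimal sketch. The helpers `sample_refinement` and `embed` below are hypothetical stand-ins for a language-model sampling call and a sentence-embedding model; the prompt format, sample count, and similarity function are assumptions for illustration, not the paper's exact choices.

```python
# Sketch of steps 1-2 of the three-step learning algorithm described above.
# `sample_refinement` and `embed` are hypothetical callables supplied by the
# user (an LM API call and a text-embedding model, respectively).

from typing import Callable, List
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def select_refinement(
    task_input: str,
    initial_output: str,
    feedback: str,
    sample_refinement: Callable[[str], str],  # prompt -> one sampled refinement
    embed: Callable[[str], np.ndarray],       # text -> embedding vector
    n_samples: int = 8,
) -> str:
    # Step 1: condition the model on the input, its initial output, and the
    # human feedback, and sample many candidate refinements.
    prompt = (
        f"Input: {task_input}\n"
        f"Initial output: {initial_output}\n"
        f"Feedback: {feedback}\n"
        f"Refined output:"
    )
    candidates: List[str] = [sample_refinement(prompt) for _ in range(n_samples)]

    # Step 2: keep the candidate whose embedding is most similar to the feedback.
    feedback_vec = embed(feedback)
    return max(candidates, key=lambda c: cosine(embed(c), feedback_vec))


# Step 3 (not shown): finetune the model to maximize the likelihood of the
# selected refinement given the original task input.
```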

Authors (6)
  1. Jérémy Scheurer (15 papers)
  2. Jon Ander Campos (20 papers)
  3. Jun Shern Chan (8 papers)
  4. Angelica Chen (22 papers)
  5. Kyunghyun Cho (292 papers)
  6. Ethan Perez (55 papers)
Citations (45)