Preference-grounded Token-level Guidance for Language Model Fine-tuning (2306.00398v2)

Published 1 Jun 2023 in cs.CL

Abstract: Aligning language models (LMs) with preferences is an important problem in natural language generation. A key challenge is that preferences are typically provided at the sequence level while LM training and generation both occur at the token level. There is, therefore, a granularity mismatch between the preference and the LM training losses, which may complicate the learning problem. In this paper, we address this issue by developing an alternate training process, where we iterate between grounding the sequence-level preference into token-level training guidance, and improving the LM with the learned guidance. For guidance learning, we design a framework that extends the pairwise-preference learning in imitation learning to both variable-length LM generation and the utilization of the preference among multiple generations. For LM training, based on the amount of supervised data, we present two minimalist learning objectives that utilize the learned guidance. In experiments, our method performs competitively on two distinct representative LM tasks: discrete-prompt generation and text summarization.

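To make the alternating procedure concrete, below is a minimal PyTorch sketch of the two stages described in the abstract: grounding sequence-level preferences into token-level guidance with a pairwise loss, then fine-tuning the LM with a guidance-weighted token-level objective. The toy GRU modules, the Bradley-Terry-style loss over averaged token scores, and the softmax-normalized weighting are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of the alternating loop: (1) learn token-level guidance from
# sequence-level preference pairs, (2) improve the LM with that guidance.
# All architectures, loss forms, and hyperparameters here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, MAX_LEN = 100, 32, 12

class TinyLM(nn.Module):
    """Toy autoregressive LM: embeds tokens, predicts next-token logits."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, tokens):                      # (B, T) -> (B, T, VOCAB)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)

class TokenGuidance(nn.Module):
    """Assigns a scalar guidance score to every token of a sequence."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.score = nn.Linear(HIDDEN, 1)

    def forward(self, tokens):                      # (B, T) -> (B, T)
        h, _ = self.rnn(self.embed(tokens))
        return self.score(h).squeeze(-1)

def guidance_loss(guide, preferred, dispreferred):
    # Pairwise preference loss: the length-averaged token score of the
    # preferred generation should exceed that of the dispreferred one.
    s_pos = guide(preferred).mean(dim=1)
    s_neg = guide(dispreferred).mean(dim=1)
    return -F.logsigmoid(s_pos - s_neg).mean()

def lm_loss(lm, guide, tokens):
    # Guidance-weighted token-level cross-entropy on sampled sequences.
    logits = lm(tokens[:, :-1])
    logp = F.log_softmax(logits, dim=-1)
    tok_logp = logp.gather(-1, tokens[:, 1:].unsqueeze(-1)).squeeze(-1)
    with torch.no_grad():
        # Normalize the learned guidance across each sequence's tokens.
        w = torch.softmax(guide(tokens)[:, 1:], dim=-1)
    return -(w * tok_logp).sum(dim=1).mean()

lm, guide = TinyLM(), TokenGuidance()
opt_lm = torch.optim.Adam(lm.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(guide.parameters(), lr=1e-3)

for step in range(3):
    # Stage 1: learn guidance from a (synthetic) preference pair of generations.
    pos = torch.randint(0, VOCAB, (4, MAX_LEN))
    neg = torch.randint(0, VOCAB, (4, MAX_LEN))
    opt_g.zero_grad()
    guidance_loss(guide, pos, neg).backward()
    opt_g.step()

    # Stage 2: improve the LM with the learned token-level guidance.
    opt_lm.zero_grad()
    lm_loss(lm, guide, pos).backward()
    opt_lm.step()
```

The point of the weighting in `lm_loss` is the granularity fix the abstract describes: a single sequence-level preference is redistributed over individual tokens, so the LM's per-token training signal reflects which parts of a generation the preference rewards.
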
Authors (6)
  1. Shentao Yang (10 papers)
  2. Shujian Zhang (28 papers)
  3. Congying Xia (32 papers)
  4. Yihao Feng (35 papers)
  5. Caiming Xiong (337 papers)
  6. Mingyuan Zhou (161 papers)
Citations (16)