Sentence-level Reward Model can Generalize Better for Aligning LLM from Human Preference (2503.04793v4)

Published 1 Mar 2025 in cs.CL and cs.LG

Abstract: Learning reward models from human preference datasets and subsequently optimizing LLMs via reinforcement learning has emerged as a fundamental paradigm for aligning LLMs with human preferences. The performance of the reward model plays a crucial role in the effectiveness of alignment. Previous reward models operate at a coarse-grained level, requiring the generation of a complete response to obtain a reward value. The sparse reward may present challenges for downstream reinforcement learning. While recent efforts have attempted to learn token-level reward models, the lack of explicit semantic information makes it difficult to model the credit of every individual token. In this paper, we propose assigning scores to every sentence, introducing an intermediate-grained reward model. By segmenting the complete response into sentences and applying differential operations to reward output at the start and end positions of each sentence, we can effectively model the rewards of sentences. Moreover, a novel attention mechanism is introduced to aggregate the scores of all sentences into a response-level score, which allows it to be trained using the Bradley-Terry model. On common benchmarks, our method outperforms the response-level reward model by 2.7% on RewardBench (for reward modeling evaluation) and surpasses all baselines on AlpacaEval (for alignment evaluation).
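To make the described mechanism concrete, below is a minimal PyTorch sketch of the pipeline the abstract outlines: per-position reward outputs are differenced at the start and end positions of each sentence, the resulting sentence scores are aggregated into a response-level score with attention weights, and that score is trained with a Bradley-Terry preference loss. All function names, the toy data, and the stand-in attention scorer are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a sentence-level reward model, per the abstract.
# Names (sentence_rewards, response_score, ...) are hypothetical.
import torch
import torch.nn.functional as F

def sentence_rewards(token_rewards, sentence_spans):
    """Differential operation: credit each sentence with the change in
    the reward output between its end and start positions.
    token_rewards: (T,) per-position scalar reward outputs.
    sentence_spans: list of (start, end) index pairs, end inclusive."""
    return torch.stack([token_rewards[e] - token_rewards[s]
                        for s, e in sentence_spans])

def response_score(sentence_scores, attn_logits):
    """Aggregate sentence scores into one response-level score via a
    softmax attention weighting; attn_logits stand in for the output
    of a small learned scoring network."""
    weights = torch.softmax(attn_logits, dim=-1)
    return (weights * sentence_scores).sum()

def bradley_terry_loss(score_chosen, score_rejected):
    """Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected)."""
    return -F.logsigmoid(score_chosen - score_rejected)

# Toy usage: a chosen and a rejected response, each split into sentences.
torch.manual_seed(0)
rw_chosen = torch.randn(12)       # per-position reward outputs (chosen)
rw_rejected = torch.randn(10)     # per-position reward outputs (rejected)
spans_chosen = [(0, 4), (5, 11)]  # two sentences in the chosen response
spans_rejected = [(0, 3), (4, 9)]

s_c = sentence_rewards(rw_chosen, spans_chosen)
s_r = sentence_rewards(rw_rejected, spans_rejected)
loss = bradley_terry_loss(response_score(s_c, torch.randn(2)),
                          response_score(s_r, torch.randn(2)))
print(loss.item())
```

In a full training setup, the per-position reward outputs would come from a reward head on top of an LLM backbone, and the attention logits from a learned scorer over sentence representations; this sketch only shows how the differencing, aggregation, and preference loss compose.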

Authors (7)
  1. Wenjie Qiu (7 papers)
  2. Yi-Chen Li (10 papers)
  3. Xuqin Zhang (1 paper)
  4. Tianyi Zhang (262 papers)
  5. Yihang Zhang (18 papers)
  6. Zongzhang Zhang (33 papers)
  7. Yang Yu (385 papers)