Averaging log-likelihoods in direct alignment (2406.19188v1)
Abstract: To better align LLMs with human judgment, Reinforcement Learning from Human Feedback (RLHF) learns a reward model and then optimizes it using regularized RL. Recently, direct alignment methods were introduced to learn such a fine-tuned model directly from a preference dataset, without computing a proxy reward function. These methods are built upon contrastive losses involving the log-likelihood of (dis)preferred completions according to the trained model. However, completions have various lengths, and the log-likelihood is not length-invariant. On the other hand, the cross-entropy loss used in supervised training is length-invariant, as batches are typically averaged token-wise. To reconcile these approaches, we introduce a principled approach for making direct alignment length-invariant. Formally, we introduce a new averaging operator, to be composed with the optimality operator that gives the best policy for the underlying RL problem. It translates into averaging the log-likelihood within the loss. We empirically study the effect of such averaging, observing a trade-off between the length of generations and their scores.
- Nathan Grinsztajn
- Yannis Flet-Berliac
- Mohammad Gheshlaghi Azar
- Florian Strub
- Bill Wu
- Eugene Choi
- Chris Cremer
- Arash Ahmadian
- Yash Chandak
- Olivier Pietquin
- Matthieu Geist
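
The abstract's core idea, averaging the log-likelihood of each completion within a contrastive preference loss so that completions of different lengths become comparable, can be illustrated with a short sketch. The following PyTorch snippet is a minimal illustration, not the paper's reference implementation: it contrasts a DPO-style loss that sums per-token log-probabilities with a length-averaged variant. The function names, tensor shapes, and the `beta` value are assumptions made for the example.

```python
import torch
import torch.nn.functional as F


def sequence_logprob(token_logps: torch.Tensor, mask: torch.Tensor, average: bool) -> torch.Tensor:
    """Sum (or length-average) per-token log-probabilities over completion tokens.

    token_logps: (batch, seq_len) log-probabilities of the sampled tokens.
    mask:        (batch, seq_len) 1.0 for completion tokens, 0.0 for prompt/padding.
    """
    summed = (token_logps * mask).sum(dim=-1)
    if average:
        # Length-averaged log-likelihood: invariant to completion length.
        return summed / mask.sum(dim=-1).clamp(min=1.0)
    return summed


def dpo_style_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected,
                   chosen_mask, rejected_mask, beta: float = 0.1,
                   length_average: bool = True) -> torch.Tensor:
    """Contrastive loss over (dis)preferred completions.

    With length_average=True, the log-likelihoods entering the loss are averaged
    token-wise, analogous to the token-averaged cross-entropy of supervised training.
    """
    pi_logratio = (sequence_logprob(policy_chosen, chosen_mask, length_average)
                   - sequence_logprob(policy_rejected, rejected_mask, length_average))
    ref_logratio = (sequence_logprob(ref_chosen, chosen_mask, length_average)
                    - sequence_logprob(ref_rejected, rejected_mask, length_average))
    return -F.logsigmoid(beta * (pi_logratio - ref_logratio)).mean()
```

In this sketch, toggling `length_average` switches between the standard summed log-likelihood and its token-averaged counterpart; the paper studies the effect of such averaging and reports a trade-off between generation length and score.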