Aligning LLMs to Human Preferences through Dove: A Framework for Joint Preference Optimization
Introduction
Alignment of LLMs with human preferences is critical for their effective application across a range of tasks. Current alignment techniques, such as Direct Preference Optimization (DPO), rely primarily on conditional preference rankings, obtained by generating multiple responses to a single instruction and ranking them. This captures only a constrained view of human preferences, restricting the preference space to comparisons between responses to identical instructions. This work introduces a novel alignment framework, Dove, which extends the paradigm to joint preferences over instruction-response pairs, enabling a richer understanding of human preference dimensions that conditional rankings alone do not capture.
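For reference, the conditional DPO objective compares two responses to the same instruction through their log-probability ratios against a frozen reference model. The sketch below is a minimal rendering of that published loss, assuming per-sequence log-probabilities have already been computed; the argument names are illustrative, not taken from the Dove paper.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Conditional DPO loss: both responses answer the *same* instruction x.

    Each argument is a tensor of summed log-probabilities log pi(y | x) for
    the chosen / rejected response under the policy or the frozen reference
    model. (Argument names are illustrative.)
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # Widen the margin between the chosen and rejected log-ratios.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```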
Joint Preference Acquisition Protocol
This research revisits the traditional conditional preference acquisition paradigm and proposes acquiring joint preferences over instruction-response pairs instead. This approach allows annotators to compare instruction-response pairs whose instructions differ, exposing a broader spectrum of human preference judgments. Preferences are thus elicited over pairs of responses to distinct instructions, extending preference acquisition beyond the constraints of identical contexts.
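To make the resulting data format concrete, the sketch below shows one way a joint preference record could be represented, with the two sides of a comparison drawn from different instructions. The field names and example strings are illustrative assumptions, not the paper's schema.

```python
from dataclasses import dataclass

@dataclass
class JointPreference:
    """One joint comparison: annotators prefer the (chosen_instruction,
    chosen_response) pair over the (rejected_instruction, rejected_response)
    pair. Unlike conditional preferences, the two instructions may differ."""
    chosen_instruction: str
    chosen_response: str
    rejected_instruction: str
    rejected_response: str

# Illustrative record comparing responses to two *different* instructions.
example = JointPreference(
    chosen_instruction="Summarize the article on renewable energy.",
    chosen_response="The article reports that solar costs fell sharply ...",
    rejected_instruction="Summarize the article on urban traffic.",
    rejected_response="Traffic is bad.",
)
```

When the two instructions happen to be identical, such a record reduces to an ordinary conditional preference pair.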
The Dove framework builds on this by proposing an alignment objective that prioritizes the joint probability of the chosen instruction-response pair over that of the less preferred one. Because a joint comparison with identical instructions reduces to an ordinary conditional comparison, this objective bridges existing conditional preference optimization techniques and a more holistic preference acquisition methodology, capturing a more diverse array of human evaluative dimensions.
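As a sketch of how such an objective could be trained, the code below scores each side of a joint comparison by the log-probability of its own (instruction, response) pair and applies a DPO-style sigmoid margin between the two log-ratios. This is one plausible reading of the description above, assuming a Hugging Face-style causal LM and tokenizer; the sequence_logprob helper, the prompt formatting, and the reuse of the JointPreference record from the earlier sketch are illustrative simplifications rather than the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def sequence_logprob(model, tokenizer, instruction, response):
    """Summed log-probability of the response tokens given the instruction.
    Assumes the tokenization of the instruction is a prefix of the
    tokenization of instruction + response (a simplification)."""
    prompt_len = tokenizer(instruction, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(instruction + response, return_tensors="pt").input_ids
    logits = model(full_ids).logits[:, :-1, :]          # predictions for tokens 1..L-1
    targets = full_ids[:, 1:]
    token_logp = torch.log_softmax(logits, dim=-1).gather(
        -1, targets.unsqueeze(-1)).squeeze(-1)
    return token_logp[:, prompt_len - 1:].sum(dim=-1)   # response tokens only

def dove_loss(policy, ref, tokenizer, pref, beta=0.1):
    """Joint-preference loss sketch: the chosen and rejected sides may
    condition on *different* instructions. (Assumed DPO-style form; in
    practice the reference model's terms would run under torch.no_grad.)"""
    chosen = (sequence_logprob(policy, tokenizer, pref.chosen_instruction, pref.chosen_response)
              - sequence_logprob(ref, tokenizer, pref.chosen_instruction, pref.chosen_response))
    rejected = (sequence_logprob(policy, tokenizer, pref.rejected_instruction, pref.rejected_response)
                - sequence_logprob(ref, tokenizer, pref.rejected_instruction, pref.rejected_response))
    # Prefer the chosen pair's log-ratio over the rejected pair's log-ratio.
    return -F.logsigmoid(beta * (chosen - rejected)).mean()
```

Under this reading, the functional form mirrors conditional DPO; what changes is that the two log-ratios are computed under different instructions, which is exactly the extra signal the joint acquisition protocol supplies.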
Results and Implications
The empirical evaluation demonstrates Dove's superiority over traditional methods, including DPO, in aligning LLMs with human preferences. When applied to summarization and open-ended dialogue tasks, Dove achieved significant improvements, with win rates surpassing those of LLMs aligned with DPO by 5.2% and 3.3% on the respective tasks. These findings underscore the effectiveness of leveraging joint preferences for a more comprehensive alignment of LLM outputs with human preferences.
Moreover, this work’s exploration of joint preference optimization opens new paths for preference elicitation that remain hidden under conventional alignment protocols based on conditional preference rankings. It encourages a reevaluation of preference acquisition paradigms to foster the development of LLMs that better reflect diverse human values and intentions.
Future Directions
The introduction of Dove paves the way for further research into preference acquisition and model alignment. Future investigations could examine how to select instruction-response pairs for joint preference acquisition, balancing the richness of the preference data against alignment efficacy. Integrating Dove with existing and upcoming model architectures to strengthen LLM alignment with human values across a wider range of domains also remains a promising avenue.
In conclusion, by elucidating the limitations of existing preference acquisition protocols and presenting a robust framework for leveraging joint preferences over instruction-response pairs, this work takes a significant step towards aligning LLMs with the more intricate dimensions of human preferences. Dove demonstrates the potential for improved LLM performance across varied tasks through a novel optimization objective, and it invites a reimagining of preference acquisition methodologies, opening new frontiers in the alignment of AI systems with human values.