Sequence-level Large Language Model Training with Contrastive Preference Optimization (2502.16433v1)
Abstract: The next-token prediction loss is the dominant self-supervised training objective for LLMs and has achieved promising results in a variety of downstream tasks. However, upon closer investigation of this objective, we find that it lacks an understanding of sequence-level signals, leading to a mismatch between the training and inference processes. To bridge this gap, we introduce a contrastive preference optimization (CPO) procedure that can inject sequence-level information into the LLM at any training stage without expensive human-labeled data. Our experiments show that the proposed objective surpasses next-token prediction in terms of win rate on instruction-following and text generation tasks.
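To make the idea concrete, below is a minimal PyTorch sketch of a generic contrastive preference loss in the DPO/CPO family, not the paper's exact objective. The function names (`sequence_logps`, `contrastive_preference_loss`), the margin temperature `beta`, and the `sft_weight` term anchoring the preferred sequence are illustrative assumptions; the key point it demonstrates is that the loss compares whole-sequence log-probabilities of a preferred and a dispreferred output, rather than scoring tokens independently.

```python
import torch
import torch.nn.functional as F


def sequence_logps(logits: torch.Tensor, labels: torch.Tensor, pad_id: int) -> torch.Tensor:
    """Sum per-token log-probs over each sequence (ignoring padding)
    to obtain a sequence-level log-probability. Shapes are assumed:
    logits (batch, seq, vocab), labels (batch, seq)."""
    logps = torch.log_softmax(logits, dim=-1)
    token_logps = logps.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    mask = (labels != pad_id).float()
    return (token_logps * mask).sum(dim=-1)


def contrastive_preference_loss(
    chosen_logps: torch.Tensor,    # (batch,) sequence log-probs of preferred outputs
    rejected_logps: torch.Tensor,  # (batch,) sequence log-probs of dispreferred outputs
    beta: float = 0.1,             # temperature on the preference margin (assumed value)
    sft_weight: float = 1.0,       # weight on the likelihood anchor (assumed value)
) -> torch.Tensor:
    # Contrastive term: push the policy's sequence-level log-prob of the
    # preferred output above that of the dispreferred one.
    contrastive = -F.logsigmoid(beta * (chosen_logps - rejected_logps))
    # Likelihood anchor on the preferred sequence, so the policy does not
    # drift away from fluent outputs while widening the margin.
    sft = -chosen_logps
    return (contrastive + sft_weight * sft).mean()
```

In use, the preferred/dispreferred pairs could come from model-generated candidates ranked by an automatic scorer rather than human annotation, which is what allows a sequence-level signal to be injected without expensive labeled data.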