BAPO: Base-Anchored Preference Optimization for Overcoming Forgetting in Large Language Models Personalization (2407.00693v2)

Published 30 Jun 2024 in cs.AI, cs.CL, and cs.LG

Abstract: While learning to align LLMs with human preferences has shown remarkable success, aligning these models to diverse user preferences poses a further challenge: preserving previously acquired knowledge. This paper examines the impact of personalized preference optimization on LLMs, revealing that the extent of knowledge loss varies significantly with preference heterogeneity. Although previous approaches have employed a KL constraint between the reference model and the policy model, we observe that they fail to maintain general knowledge and alignment when faced with personalized preferences. To this end, we introduce Base-Anchored Preference Optimization (BAPO), a simple yet effective approach that uses the reference model's initial responses to mitigate forgetting while accommodating personalized alignment. BAPO adapts effectively to diverse user preferences while minimally affecting global knowledge or general alignment. Our experiments demonstrate the efficacy of BAPO across various setups.
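
The abstract indicates that BAPO anchors preference optimization to the reference (base) model's initial responses to mitigate forgetting. Below is a minimal PyTorch sketch of one plausible reading: a DPO-style preference loss combined with an anchoring term that discourages the policy from assigning lower likelihood than the reference model to the reference model's own responses. The anchoring form, the function names, and the weighting `lam` are assumptions for illustration, not the paper's verbatim algorithm.

```python
# Hedged sketch of a BAPO-style objective: DPO preference loss plus an
# assumed anchoring term on the base model's initial responses.
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss on (chosen, rejected) preference pairs.
    Inputs are summed sequence log-probs under policy (pi_*) and
    reference (ref_*) models."""
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -F.logsigmoid(logits).mean()

def anchor_loss(pi_anchor, ref_anchor, beta=0.1):
    """Assumed anchoring term: penalize the policy when its likelihood of
    the reference model's initial responses (the 'base anchors') falls
    below the reference model's own likelihood."""
    return -F.logsigmoid(beta * (pi_anchor - ref_anchor)).mean()

def bapo_style_loss(batch, beta=0.1, lam=1.0):
    """Combined objective: personalized preference optimization plus base
    anchoring. `lam` (an assumed hyperparameter) trades off adapting to
    user preferences against forgetting general knowledge."""
    pref = dpo_loss(batch["pi_chosen"], batch["pi_rejected"],
                    batch["ref_chosen"], batch["ref_rejected"], beta)
    anchor = anchor_loss(batch["pi_anchor"], batch["ref_anchor"], beta)
    return pref + lam * anchor

if __name__ == "__main__":
    # Toy usage with random per-example sequence log-probs.
    torch.manual_seed(0)
    batch = {k: torch.randn(8) for k in
             ["pi_chosen", "pi_rejected", "ref_chosen", "ref_rejected",
              "pi_anchor", "ref_anchor"]}
    print(bapo_style_loss(batch).item())
```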

Authors (7)
  1. Gihun Lee
  2. Minchan Jeong
  3. Yujin Kim
  4. Hojung Jung
  5. Jaehoon Oh
  6. Sangmook Kim
  7. Se-Young Yun