Teaching Language Models to Evolve with Users: Dynamic Profile Modeling for Personalized Alignment (2505.15456v1)
Abstract: Personalized alignment is essential for enabling LLMs to engage effectively in user-centric dialogue. While recent prompt-based and offline optimization methods offer preliminary solutions, they fall short in cold-start scenarios and long-term personalization due to their inherently static and shallow designs. In this work, we introduce the Reinforcement Learning for Personalized Alignment (RLPA) framework, in which an LLM interacts with a simulated user model to iteratively infer and refine user profiles through dialogue. The training process is guided by a dual-level reward structure: the Profile Reward encourages accurate construction of user representations, while the Response Reward incentivizes the generation of responses consistent with the inferred profile. We instantiate RLPA by fine-tuning Qwen-2.5-3B-Instruct, resulting in Qwen-RLPA, which achieves state-of-the-art performance in personalized dialogue. Empirical evaluations demonstrate that Qwen-RLPA consistently outperforms prompting and offline fine-tuning baselines, and even surpasses advanced commercial models such as Claude-3.5 and GPT-4o. Further analysis highlights Qwen-RLPA's robustness in reconciling conflicting user preferences, sustaining long-term personalization, and delivering more efficient inference compared to recent reasoning-focused LLMs. These results emphasize the potential of dynamic profile inference as a more effective paradigm for building personalized dialogue systems.
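To make the dual-level reward idea concrete, here is a minimal sketch of how a Profile Reward and a Response Reward could be combined into a single scalar for RL fine-tuning. Everything in it is an assumption for illustration: the function names, the overlap-based scoring proxies, and the alpha/beta weighting are not taken from the paper, which presumably uses richer, model-based reward signals.

```python
# Hypothetical sketch of a dual-level reward in the spirit of RLPA.
# Not the paper's implementation; scoring functions and weights are illustrative.

from dataclasses import dataclass, field


@dataclass
class UserProfile:
    """Inferred user attributes, e.g. {"diet": "vegetarian", "tone": "casual"}."""
    attributes: dict[str, str] = field(default_factory=dict)


def profile_reward(inferred: UserProfile, ground_truth: UserProfile) -> float:
    """Reward accurate profile construction: fraction of ground-truth
    attributes the model inferred correctly (a simple stand-in metric)."""
    if not ground_truth.attributes:
        return 0.0
    correct = sum(
        1 for key, value in ground_truth.attributes.items()
        if inferred.attributes.get(key) == value
    )
    return correct / len(ground_truth.attributes)


def response_reward(response: str, inferred: UserProfile) -> float:
    """Reward responses consistent with the inferred profile. Here a crude
    keyword-overlap proxy stands in for a learned or model-based judge."""
    if not inferred.attributes:
        return 0.0
    hits = sum(
        1 for value in inferred.attributes.values()
        if value.lower() in response.lower()
    )
    return hits / len(inferred.attributes)


def rlpa_style_reward(
    response: str,
    inferred: UserProfile,
    ground_truth: UserProfile,
    alpha: float = 0.5,  # weight on profile accuracy (assumed)
    beta: float = 0.5,   # weight on response consistency (assumed)
) -> float:
    """Combine both signals into one scalar suitable for RL fine-tuning."""
    return (
        alpha * profile_reward(inferred, ground_truth)
        + beta * response_reward(response, inferred)
    )


if __name__ == "__main__":
    truth = UserProfile({"diet": "vegetarian", "tone": "casual"})
    guess = UserProfile({"diet": "vegetarian", "tone": "formal"})
    reply = "Sure! Here's a quick vegetarian recipe you might enjoy."
    print(round(rlpa_style_reward(reply, guess, truth), 3))
```

In this toy setup the two rewards pull in complementary directions: the first pushes the policy to keep its inferred profile faithful to the user, while the second pushes responses to actually use that profile, which is the interaction the abstract describes.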
- Weixiang Zhao (21 papers)
- Xingyu Sui (9 papers)
- Yulin Hu (37 papers)
- Jiahe Guo (12 papers)
- Haixiao Liu (1 paper)
- Biye Li (6 papers)
- Yanyan Zhao (39 papers)
- Bing Qin (186 papers)
- Ting Liu (329 papers)