Mimicking User Data: On Mitigating Fine-Tuning Risks in Closed Large Language Models (2406.10288v2)
Abstract: Fine-tuning LLMs on small, high-quality datasets can enhance their performance on specific downstream tasks. Recent research shows that fine-tuning on benign, instruction-following data can inadvertently undo the safety alignment process and increase a model's propensity to comply with harmful queries. Understanding and mitigating safety risks in well-defined tasks is critical, yet it remains distinct from the instruction-following context due to structural differences in the data. Our work addresses this gap in the understanding of these risks across diverse types of data in closed models, where providers control how user data is utilized in the fine-tuning process. We demonstrate how malicious actors can subtly manipulate the structure of almost any task-specific dataset to foster significantly more dangerous model behaviors, while maintaining an appearance of innocuity and reasonable downstream task performance. To address this issue, we propose a novel mitigation strategy that mixes in safety data mimicking the task format and prompting style of the user data, and show that it is more effective than existing baselines at re-establishing safety alignment while maintaining similar task performance.
- Francisco Eiras
- Aleksandar Petrov
- Phillip H. S. Torr
- M. Pawan Kumar
- Adel Bibi
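
A minimal sketch of the mitigation idea described in the abstract: generic safety examples (harmful prompt, safe refusal) are rewrapped in the same prompt template as the user's task data before being mixed into the fine-tuning set. This is an illustration of the concept under stated assumptions, not the paper's released code; the function names, field names, and the mixing ratio are all hypothetical.

```python
# Illustrative sketch (not the paper's implementation): mix format-mimicking
# safety data into a task-specific fine-tuning set. `task_template`,
# `to_task_format`, `mix_safety_data`, and the 5% ratio are assumptions.

import random


def to_task_format(safety_example: dict, task_template: str) -> dict:
    """Wrap a generic (prompt, refusal) safety pair in the task's template
    so it is structurally indistinguishable from the user's task data."""
    return {
        "prompt": task_template.format(input=safety_example["prompt"]),
        "completion": safety_example["refusal"],
    }


def mix_safety_data(task_data: list[dict], safety_data: list[dict],
                    task_template: str, ratio: float = 0.05,
                    seed: int = 0) -> list[dict]:
    """Return a fine-tuning set with a small fraction of format-mimicking
    safety examples mixed in (ratio is relative to the task set size)."""
    rng = random.Random(seed)
    n_safety = max(1, int(ratio * len(task_data)))
    mimicked = [
        to_task_format(ex, task_template)
        for ex in rng.sample(safety_data, k=min(n_safety, len(safety_data)))
    ]
    mixed = task_data + mimicked
    rng.shuffle(mixed)  # interleave so safety examples are not clustered
    return mixed
```

The design intuition, as the abstract frames it, is that safety data matching the task's surface format counteracts the alignment erosion more effectively than off-the-shelf safety data in a mismatched format, while leaving downstream task performance largely intact.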