Dissecting Fine-Tuning Unlearning in Large Language Models (2410.06606v2)
Abstract: Fine-tuning-based unlearning methods are widely used to remove targeted harmful, sensitive, or copyrighted information from LLMs while preserving their overall capabilities. However, the true effectiveness of these methods is unclear. In this work, we examine the limitations of fine-tuning-based unlearning through activation patching and parameter restoration experiments. Our findings reveal that these methods alter the model's knowledge retrieval process rather than genuinely erasing the problematic knowledge embedded in the model parameters. Instead, the coefficients produced by the MLP components in the model's final layer are the primary drivers of these apparent unlearning effects and play a crucial role in controlling the model's behavior. Furthermore, behavioral tests demonstrate that this unlearning mechanism inevitably impacts the global behavior of the model, affecting unrelated knowledge and capabilities. The code is released at https://github.com/yihuaihong/Dissecting-FT-Unlearning.
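To make the activation patching experiment concrete, below is a minimal sketch of the core idea: cache the final-layer MLP output from one model and patch it into another during a forward pass, then compare next-token predictions. It assumes a GPT-2-style model from Hugging Face `transformers` purely for illustration; the model names, prompt, and hook placement here are placeholders, not the paper's setup, and the authors' actual experiments are in the linked repository.

```python
# Minimal activation-patching sketch (illustrative only, not the paper's code).
# Assumption: a GPT-2-style architecture where the final-layer MLP is reachable
# at model.transformer.h[-1].mlp. Both models load "gpt2" here as stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")       # stand-in: original model
unlearned = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in: "unlearned" model

prompt = "The capital of France is"  # placeholder probe, not from the paper
inputs = tokenizer(prompt, return_tensors="pt")

# 1. Record the final-layer MLP output from the base model.
cached = {}
def save_hook(module, inp, out):
    cached["mlp_out"] = out.detach()

handle = base.transformer.h[-1].mlp.register_forward_hook(save_hook)
with torch.no_grad():
    base(**inputs)
handle.remove()

# 2. Patch that activation into the unlearned model's forward pass.
#    Returning a value from a forward hook replaces the module's output.
def patch_hook(module, inp, out):
    return cached["mlp_out"]

handle = unlearned.transformer.h[-1].mlp.register_forward_hook(patch_hook)
with torch.no_grad():
    patched_logits = unlearned(**inputs).logits[0, -1]
handle.remove()

with torch.no_grad():
    unpatched_logits = unlearned(**inputs).logits[0, -1]

# If the "forgotten" answer reappears under patching, the knowledge was
# suppressed by the final-layer MLP output rather than erased from the weights.
print("patched:  ", tokenizer.decode(patched_logits.argmax().item()))
print("unpatched:", tokenizer.decode(unpatched_logits.argmax().item()))
```

In the paper's framing, a gap between the patched and unpatched predictions on forget-set prompts would indicate that the unlearned model still encodes the target knowledge and merely routes around it at the final layer.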
- Yihuai Hong
- Yuelin Zou
- Lijie Hu
- Ziqian Zeng
- Di Wang
- Haiqin Yang