Label Smoothing Improves Machine Unlearning (2406.07698v1)
Abstract: The objective of machine unlearning (MU) is to eliminate previously learned data from a model. However, existing MU techniques struggle to balance computation cost against performance. Inspired by the influence of label smoothing on model confidence and by differential privacy, we propose UGradSL, a simple, plug-and-play, gradient-based MU method that applies an inverse process of label smoothing. We provide theoretical analyses demonstrating why properly introduced label smoothing improves MU performance, and we conduct extensive experiments on six datasets of various sizes and modalities, demonstrating the effectiveness and robustness of the proposed method. The consistent improvement in MU performance comes at only a marginal additional computational cost; for instance, UGradSL improves unlearning accuracy over the gradient-ascent MU baseline by 66% without sacrificing unlearning efficiency.
- Zonglin Di (9 papers)
- Zhaowei Zhu (29 papers)
- Jinghan Jia (30 papers)
- Jiancheng Liu (19 papers)
- Zafar Takhirov (4 papers)
- Bo Jiang (235 papers)
- Yuanshun Yao (28 papers)
- Sijia Liu (204 papers)
- Yang Liu (2253 papers)
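The core idea in the abstract, gradient-ascent unlearning driven by an inverse (negative) label-smoothing process, can be sketched in a few lines. This is a minimal illustrative toy, not the authors' UGradSL implementation: it uses a linear softmax classifier, and the helpers `smooth_labels` and `unlearn_step` are hypothetical names. A negative smoothing coefficient sharpens the target beyond one-hot, which strengthens the ascent signal that pushes the model away from the forget-set label.

```python
import numpy as np

def smooth_labels(one_hot, alpha):
    # Standard label smoothing: (1 - alpha) * y + alpha / K.
    # alpha > 0 softens targets; alpha < 0 is the "inverse" direction
    # the paper describes (sharpening past one-hot). Hypothetical helper.
    k = one_hot.shape[-1]
    return (1.0 - alpha) * one_hot + alpha / k

def unlearn_step(W, x, y_onehot, alpha, lr):
    # One gradient-ascent unlearning step on a forget-set example for a
    # linear softmax classifier with weights W (features x classes).
    logits = x @ W
    p = np.exp(logits - logits.max())
    p /= p.sum()
    y_sm = smooth_labels(y_onehot, alpha)
    grad = np.outer(x, p - y_sm)  # cross-entropy gradient w.r.t. W
    return W + lr * grad          # ascent: move *up* the forget loss

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3)) * 0.1
x = rng.normal(size=4)
y = np.array([1.0, 0.0, 0.0])  # forget-set example labeled class 0

def class0_prob(W):
    logits = x @ W
    p = np.exp(logits - logits.max())
    return (p / p.sum())[0]

p_before = class0_prob(W)
for _ in range(50):
    W = unlearn_step(W, x, y, alpha=-0.2, lr=0.1)
p_after = class0_prob(W)
```

After the ascent steps, the model's confidence on the forgotten label collapses (`p_after` is far below `p_before`), which is the qualitative behavior unlearning aims for on the forget set.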