
Label Smoothing Improves Machine Unlearning (2406.07698v1)

Published 11 Jun 2024 in cs.LG

Abstract: The objective of machine unlearning (MU) is to eliminate previously learned data from a model. However, it is challenging to strike a balance between computation cost and performance when using existing MU techniques. Taking inspiration from the influence of label smoothing on model confidence and differential privacy, we propose UGradSL, a simple, plug-and-play gradient-based MU approach that uses an inverse process of label smoothing. We provide theoretical analyses demonstrating why properly introducing label smoothing improves MU performance. We conducted extensive experiments on six datasets of various sizes and modalities, demonstrating the effectiveness and robustness of the proposed method. The consistent improvement in MU performance comes at only a marginal cost in additional computation; for instance, UGradSL improves over the gradient ascent MU baseline by 66% in unlearning accuracy without sacrificing unlearning efficiency.
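
The core mechanism the abstract describes, gradient ascent on the forget set with labels smoothed by a negative (inverse) factor, can be sketched in a few lines. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' released implementation: the function names (`smooth_labels`, `unlearn_step`) and the smoothing value `alpha=-0.2` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def smooth_labels(targets, num_classes, alpha):
    # Soft label distributions: alpha > 0 is standard label smoothing;
    # alpha < 0 pushes mass beyond one-hot, the "inverse" smoothing
    # direction used for unlearning (sign and magnitude illustrative).
    one_hot = F.one_hot(targets, num_classes).float()
    return one_hot * (1.0 - alpha) + alpha / num_classes

def unlearn_step(model, optimizer, x, y, num_classes, alpha=-0.2):
    # One gradient-ascent step on a forget-set batch: maximize the
    # smoothed cross-entropy by minimizing its negation.
    optimizer.zero_grad()
    logits = model(x)
    soft = smooth_labels(y, num_classes, alpha)
    ce = -(soft * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    (-ce).backward()  # ascend the loss on the data to be forgotten
    optimizer.step()
    return ce.item()
```

In practice, gradient-ascent unlearning of this kind is typically interleaved with gradient descent on the retained data to preserve model utility; the function above sketches only the forget-set half of such a loop.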

Authors (9)
  1. Zonglin Di (9 papers)
  2. Zhaowei Zhu (29 papers)
  3. Jinghan Jia (30 papers)
  4. Jiancheng Liu (19 papers)
  5. Zafar Takhirov (4 papers)
  6. Bo Jiang (235 papers)
  7. Yuanshun Yao (28 papers)
  8. Sijia Liu (204 papers)
  9. Yang Liu (2253 papers)
Citations (1)
