
Few-Shot Unlearning by Model Inversion (2205.15567v2)

Published 31 May 2022 in cs.LG

Abstract: We consider a practical scenario of machine unlearning: erasing a target dataset that causes unexpected behavior in the trained model. In a standard unlearning scenario, the target dataset is often assumed to be fully identifiable. Such flawless identification, however, is almost impossible when the training dataset is inaccessible at the time of unlearning. Unlike previous approaches that require a complete set of targets, we consider a few-shot unlearning scenario in which only a few samples of the target data are available. To this end, we formulate the few-shot unlearning problem by specifying the intention behind the unlearning request (e.g., pure unlearning, mislabel correction, privacy protection), and we devise a straightforward framework that (i) retrieves a proxy of the training data via model inversion, fully exploiting the information available in the unlearning context; (ii) adjusts the proxy according to the unlearning intention; and (iii) updates the model with the adjusted proxy. We demonstrate that our method, using only a subset of the target data, can outperform state-of-the-art unlearning methods even when they are given a complete indication of the target data.
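The three-stage framework in the abstract can be sketched end to end on a toy model. The following is a minimal NumPy illustration, not the authors' implementation: a tiny linear softmax classifier stands in for the trained model, stage (i) synthesizes proxy data by gradient descent on the inputs (a simple form of model inversion), stage (ii) relabels proxy points near a few hypothetical mislabeled target samples (the mislabel-correction intention), and stage (iii) fine-tunes the model on the adjusted proxy. All data, dimensions, and hyperparameters are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(W, b, X, y, lr=0.1, steps=200):
    """Gradient descent on cross-entropy for a linear softmax model (in place)."""
    Y = np.eye(2)[y]
    for _ in range(steps):
        P = softmax(X @ W + b)
        G = (P - Y) / len(X)
        W -= lr * X.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

# Stage 0: a trained model on toy 2-class, 2-D data (stand-in for the original model).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W, b = train(np.zeros((2, 2)), np.zeros(2), X, y)

# Stage (i): model inversion -- optimize inputs so the model assigns them to `cls`,
# producing a proxy of the (inaccessible) training data from the model alone.
def invert(W, b, cls, n=20, lr=0.5, steps=100):
    Xp = rng.normal(0, 1, (n, 2))
    T = np.eye(2)[np.full(n, cls)]
    for _ in range(steps):
        P = softmax(Xp @ W + b)
        Xp -= lr * (P - T) @ W.T  # descend d(cross-entropy)/d(input)
    return Xp

proxy = np.vstack([invert(W, b, 0), invert(W, b, 1)])
proxy_y = np.array([0] * 20 + [1] * 20)
W_before, b_before = W.copy(), b.copy()  # keep the pre-update model for reference

# Stage (ii): adjust the proxy per the unlearning intention. Here, mislabel
# correction: a few target samples were wrongly labeled class 1, so proxy
# points near them are relabeled to the corrected class 0.
targets = rng.normal(2, 0.5, (3, 2))  # hypothetical few-shot target samples
d = ((proxy[:, None] - targets[None]) ** 2).sum(-1).min(1)
proxy_y = np.where(d < 1.0, 0, proxy_y)

# Stage (iii): update the model on the adjusted proxy.
W, b = train(W, b, proxy, proxy_y, steps=100)
```

Under this sketch, the inverted proxy acts as a surrogate training set: the pre-update model classifies the class-0 proxy points as class 0 and the class-1 points as class 1, so fine-tuning on the relabeled proxy shifts the decision boundary without ever touching the original data.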

Authors (6)
  1. Youngsik Yoon (3 papers)
  2. Jinhwan Nam (2 papers)
  3. Hyojeong Yun (1 paper)
  4. Jaeho Lee (51 papers)
  5. Dongwoo Kim (63 papers)
  6. Jungseul Ok (50 papers)
Citations (15)
