Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an Incompetent Teacher (2205.08096v2)

Published 17 May 2022 in cs.LG

Abstract: Machine unlearning has become an important area of research due to an increasing need for ML applications to comply with emerging data privacy regulations. It enables the removal of a certain set or class of data from an already trained ML model without retraining from scratch. Recently, several efforts have been made to make unlearning effective and efficient. We propose a novel machine unlearning method that explores the utility of competent and incompetent teachers in a student-teacher framework to induce forgetfulness. The knowledge from the competent and incompetent teachers is selectively transferred to the student to obtain a model that doesn't contain any information about the forget data. We experimentally show that this method generalizes well and is fast and effective. Furthermore, we introduce the zero retrain forgetting (ZRF) metric to evaluate any unlearning method. Unlike existing unlearning metrics, the ZRF score does not depend on the availability of an expensive retrained model. This also makes it useful for analyzing the unlearned model after deployment. We present results of experiments conducted for random subset forgetting and class forgetting on various deep networks and across different application domains. Source code is at: https://github.com/vikram2000b/bad-teaching-unlearning
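The abstract describes two components: a selective knowledge-transfer step that pushes the student toward an incompetent (e.g. randomly initialized) teacher on the forget data and toward the competent teacher on the retained data, and the ZRF metric, which compares the unlearned model against a randomly initialized reference instead of a retrained one. Below is a minimal PyTorch sketch of both ideas, assuming standard softmax classifiers. All names (`distillation_loss`, `unlearn_step`, `zrf_score`) and the specific KL/JS divergence choices are illustrative assumptions based on the abstract, not the authors' exact formulation; see the linked repository for the actual implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only: function names and divergence choices are
# assumptions drawn from the abstract, not the authors' published code.

def distillation_loss(student_logits, teacher_logits, T=1.0):
    """KL divergence between temperature-softened student and teacher outputs."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

def unlearn_step(student, smart_teacher, dumb_teacher, x, forget_mask, T=1.0):
    """One selective knowledge-transfer step.

    Samples flagged in `forget_mask` are matched to the incompetent
    (randomly initialized) teacher, inducing forgetting; the remaining
    samples are matched to the competent teacher, preserving knowledge.
    """
    with torch.no_grad():
        good = smart_teacher(x)   # competent teacher logits
        bad = dumb_teacher(x)     # incompetent teacher logits
    s = student(x)
    # Per-sample teacher selection: forget data -> incompetent teacher,
    # retain data -> competent teacher.
    target = torch.where(forget_mask.unsqueeze(1), bad, good)
    return distillation_loss(s, target, T)

def js_div(p, q, eps=1e-8):
    """Jensen-Shannon divergence between probability rows of p and q."""
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * ((a + eps) / (b + eps)).log()).sum(dim=1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

@torch.no_grad()
def zrf_score(unlearned_model, random_model, forget_loader):
    """Zero Retrain Forgetting score (sketch): one minus the mean JS
    divergence between the unlearned model and a randomly initialized
    model on the forget set. A score near 1 means the unlearned model
    behaves, on the forget data, like a model that never saw it; no
    expensive retrained reference model is required.
    """
    scores = []
    for x, _ in forget_loader:
        p = F.softmax(unlearned_model(x), dim=1)
        q = F.softmax(random_model(x), dim=1)
        scores.append(1.0 - js_div(p, q))
    return torch.cat(scores).mean().item()
```

Because the reference in `zrf_score` is a randomly initialized network rather than a retrained one, the metric can be computed after deployment, which is the property the abstract highlights.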

Authors (4)
  1. Vikram S Chundawat (7 papers)
  2. Ayush K Tarun (6 papers)
  3. Murari Mandal (34 papers)
  4. Mohan Kankanhalli (117 papers)
Citations (97)