Learn to Forget: Machine Unlearning via Neuron Masking (2003.10933v3)

Published 24 Mar 2020 in cs.LG, cs.CR, and stat.ML

Abstract: Nowadays, machine learning models, especially neural networks, have become prevalent in many real-world applications. These models are trained on a one-way trip from user data: once users contribute their data, there is no way to withdraw it, and it is well known that a neural network memorizes its training data. This contradicts the "right to be forgotten" clause of the GDPR, potentially leading to law violations. To this end, machine unlearning has become a popular research topic; it allows users to eliminate memorization of their private data from a trained machine learning model. In this paper, we propose the first uniform metric, called forgetting rate, to measure the effectiveness of a machine unlearning method. It is based on the concept of membership inference and describes the transformation rate of the eliminated data from "memorized" to "unknown" after unlearning. We also propose a novel unlearning method called Forsaken, which is superior to previous work in either utility or efficiency (when achieving the same forgetting rate). We benchmark Forsaken on eight standard datasets to evaluate its performance. The experimental results show that it achieves more than 90% forgetting rate on average while causing less than 5% accuracy loss.
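The following sketch illustrates the two ideas named in the abstract: a forgetting-rate-style measurement built on membership inference, and neuron masking as a mechanism for unlearning. Neither snippet is taken from the paper; the exact metric definition, the class names, and the decision to gate neurons with a sigmoid-activated mask are all assumptions made for illustration, not the authors' Forsaken procedure.

```python
import numpy as np
import torch
import torch.nn as nn


def forgetting_rate(member_before: np.ndarray, member_after: np.ndarray) -> float:
    """Fraction of erased samples that flip from 'memorized' (judged a member
    by a membership-inference attack) before unlearning to 'unknown'
    (judged a non-member) after unlearning. Boolean arrays, one entry per
    erased sample. This is one plausible reading of the metric, not
    necessarily the paper's exact formula."""
    memorized_before = member_before.sum()
    if memorized_before == 0:
        return 0.0  # nothing was memorized to begin with
    flipped = np.logical_and(member_before, ~member_after).sum()
    return float(flipped) / float(memorized_before)


class MaskedLinear(nn.Module):
    """Linear layer whose output neurons are gated by a trainable mask.
    In a neuron-masking unlearning scheme, only the mask (not the frozen
    weights) would be optimized so that predictions on the erased data
    degrade toward 'unknown' while accuracy on retained data is preserved."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.mask = nn.Parameter(torch.zeros(out_features))  # sigmoid(0) = 0.5 start

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Elementwise gate on the layer's output neurons.
        return self.linear(x) * torch.sigmoid(self.mask)
```

For example, if a membership-inference attack labels 100 erased samples as members before unlearning and only 8 of those remain labeled as members afterward, the sketch above reports a forgetting rate of 0.92, in line with the >90% figure the abstract reports for Forsaken.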

Authors (8)
  1. Yang Liu (2253 papers)
  2. Zhuo Ma (9 papers)
  3. Ximeng Liu (45 papers)
  4. Jian Liu (404 papers)
  5. Zhongyuan Jiang (4 papers)
  6. Jianfeng Ma (34 papers)
  7. Philip Yu (22 papers)
  8. Kui Ren (169 papers)
Citations (53)
