
Random Relabeling for Efficient Machine Unlearning (2305.12320v1)

Published 21 May 2023 in cs.LG and cs.CR

Abstract: Learning algorithms and data are the driving forces behind machine learning's transformation of industrial intelligence. However, individuals' right to retract their personal data, together with data privacy regulations, poses a great challenge to machine learning: how to design an efficient mechanism that supports certified data removal. Removing previously seen data, known as machine unlearning, is challenging because these data points were implicitly memorized during the training process. Retraining from scratch on the remaining data serves such deletion requests straightforwardly, but this naive method is often not computationally feasible. We propose random relabeling, an unlearning scheme applicable to generic supervised learning algorithms, to efficiently handle sequential data removal requests in the online setting. We further develop a less constraining removal certification method for logit-based classifiers, based on the similarity of the model's probability distributions to those produced by naive unlearning.
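The abstract only outlines the method, but the two ingredients it names, relabeling-based fine-tuning and a distribution-similarity certification check, lend themselves to a short illustration. Below is a minimal, hypothetical sketch in PyTorch. The function names, the uniform sampling of replacement labels, the single-batch fine-tuning loop, and the choice of KL divergence as the similarity measure are all assumptions made for illustration; they are not taken from the paper itself.

```python
import torch
import torch.nn.functional as F

def random_relabel_unlearn(model, optimizer, forget_x, forget_y, num_classes, steps=1):
    # Sketch of the random-relabeling idea: fine-tune on the points to be
    # forgotten, but with labels drawn uniformly at random from the *other*
    # classes, so updates push the model away from the original label info.
    model.train()
    for _ in range(steps):
        rand = torch.randint(0, num_classes - 1, forget_y.shape)
        new_y = rand + (rand >= forget_y).long()  # skip each point's true class
        optimizer.zero_grad()
        loss = F.cross_entropy(model(forget_x), new_y)
        loss.backward()
        optimizer.step()

def certification_gap(unlearned_model, retrained_model, x):
    # Hypothetical certification check: mean KL divergence
    # KL(retrained || unlearned) between the predictive distribution of a
    # model naively retrained from scratch and that of the unlearned model.
    # A small gap suggests the removal behaves like naive unlearning.
    unlearned_model.eval()
    retrained_model.eval()
    with torch.no_grad():
        log_p = F.log_softmax(unlearned_model(x), dim=1)
        q = F.softmax(retrained_model(x), dim=1)
        return F.kl_div(log_p, q, reduction="batchmean")
```

In this sketch, certification would compare the gap against a chosen threshold on held-out data; the paper's actual certification criterion for logit-based classifiers may differ in both the divergence used and how the threshold is set.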

Citations (3)
