Scissorhands: Scrub Data Influence via Connection Sensitivity in Networks (2401.06187v3)

Published 11 Jan 2024 in cs.LG and cs.CV

Abstract: Machine unlearning has become a pivotal task for erasing the influence of data from a trained model. It adheres to recent data-regulation standards and enhances the privacy and security of machine learning applications. In this work, we present a new machine unlearning approach, Scissorhands. Scissorhands first identifies the parameters of the given model most pertinent to the forgetting data via connection sensitivity. By reinitializing the most influential top-k percent of these parameters, it obtains a trimmed model in which the influence of the forgetting data is erased. Scissorhands then fine-tunes the trimmed model with a gradient projection-based approach, seeking parameters that preserve information on the remaining data while discarding information related to the forgetting data. Experimental results on image classification and image generation tasks demonstrate that Scissorhands achieves competitive performance compared to existing methods. Source code is available at https://github.com/JingWu321/Scissorhands.
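The abstract describes two stages: trimming the parameters most sensitive to the forgetting data, then fine-tuning with a projected gradient. The sketch below illustrates one plausible reading of those stages in PyTorch for a classification model; it is not the authors' implementation (see the linked repository for that). The SNIP-style score |theta * dL/dtheta|, the top-k fraction k, the re-initialization scale, and the PCGrad-style projection in projected_step are all assumptions made for illustration.

import copy
import torch
import torch.nn.functional as F

def trim_model(model, forget_loader, k=0.1, device="cpu"):
    # Stage 1 (sketch): score each weight by a SNIP-style connection
    # sensitivity |theta * dL/dtheta| accumulated over the forgetting data,
    # then reinitialize the top-k fraction. The score and the re-init scale
    # are assumptions, not the paper's exact recipe.
    model = copy.deepcopy(model).to(device)
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}

    for x, y in forget_loader:
        model.zero_grad()
        F.cross_entropy(model(x.to(device)), y.to(device)).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += (p.detach() * p.grad.detach()).abs()

    flat = torch.cat([s.flatten() for s in scores.values()])
    num_trim = max(1, int(k * flat.numel()))
    threshold = flat.topk(num_trim).values.min()  # top-k percent cut-off

    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in scores:
                mask = scores[n] >= threshold
                p[mask] = 0.01 * torch.randn_like(p)[mask]  # reinitialize trimmed weights
    return model

def projected_step(model, retain_batch, forget_batch, lr=1e-3, device="cpu"):
    # Stage 2 (sketch): one fine-tuning step on the remaining data whose
    # gradient is projected to remove its component along the forget-data
    # gradient, so the step does not relearn the forgetting data. This
    # PCGrad-style projection is an assumed stand-in for the paper's
    # gradient-projection objective.
    xf, yf = (t.to(device) for t in forget_batch)
    xr, yr = (t.to(device) for t in retain_batch)

    model.zero_grad()
    F.cross_entropy(model(xf), yf).backward()
    g_f = [p.grad.detach().clone() if p.grad is not None else torch.zeros_like(p)
           for p in model.parameters()]

    model.zero_grad()
    F.cross_entropy(model(xr), yr).backward()
    g_r = [p.grad.detach().clone() if p.grad is not None else torch.zeros_like(p)
           for p in model.parameters()]

    dot = sum((a * b).sum() for a, b in zip(g_r, g_f))
    norm_sq = sum((b * b).sum() for b in g_f) + 1e-12
    with torch.no_grad():
        for p, gr, gf in zip(model.parameters(), g_r, g_f):
            step = gr - (dot / norm_sq) * gf if dot > 0 else gr  # drop shared component
            p -= lr * step

A typical driver would call trim_model once on the original model and then loop projected_step over paired retain/forget batches for a few epochs; the fraction k, the re-initialization scale, and the learning rate are hyperparameters that would need tuning per task.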
