Information Leakage from Data Updates in Machine Learning Models (2309.11022v1)
Abstract: In this paper, we consider the setting where machine learning models are retrained on updated datasets in order to incorporate the most up-to-date information or reflect distribution shifts. We investigate whether one can infer information about these updates to the training data (e.g., changes to attribute values of records). Here, the adversary has access to snapshots of the machine learning model before and after the change in the dataset occurs. Contrary to the existing literature, we assume that an attribute of one or more training data points is changed, rather than that entire data records are removed or added. We propose attacks based on the difference between the prediction confidences of the original model and the updated model. We evaluate our attack methods on two public datasets, using multi-layer perceptron and logistic regression models. We validate that access to two snapshots of the model can result in higher information leakage than access to only the updated model. Moreover, we observe that data records with rare values are more vulnerable to attacks, which points to the disparate vulnerability of privacy attacks in the update setting. When multiple records with the same original attribute value are updated to the same new value (i.e., repeated changes), the attacker is more likely to correctly guess the updated values, since repeated changes leave a larger footprint on the trained model. These observations point to the vulnerability of machine learning models to attribute inference attacks in the update setting.
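To make the confidence-difference idea concrete, the following is a minimal illustrative sketch in Python/scikit-learn, not the authors' implementation; the function and variable names (`guess_updated_value`, `model_before`, `model_after`, `candidate_values`) are hypothetical. The adversary plugs each candidate attribute value into the target record, queries both model snapshots, and guesses the value whose prediction confidence increases most after the update.

```python
# Illustrative sketch (assumed, not from the paper): attribute inference from
# two model snapshots using the change in prediction confidence.
import numpy as np


def confidence(model, x, label):
    """Confidence the model assigns to the record's known class label."""
    proba = model.predict_proba(np.asarray(x).reshape(1, -1))[0]
    return proba[int(label)]


def guess_updated_value(model_before, model_after, record, label,
                        attr_index, candidate_values):
    """Guess the new value of attribute `attr_index` after the data update.

    Heuristic: try each candidate value in the target record and pick the one
    whose confidence gain between the two snapshots is largest, on the
    assumption that retraining on the updated record raises the updated
    model's confidence for the true new value.
    """
    best_value, best_gain = None, -np.inf
    for value in candidate_values:
        x = np.asarray(record, dtype=float).copy()
        x[attr_index] = value
        gain = (confidence(model_after, x, label)
                - confidence(model_before, x, label))
        if gain > best_gain:
            best_value, best_gain = value, gain
    return best_value
```

Any classifier exposing `predict_proba` (e.g., scikit-learn's `LogisticRegression` or `MLPClassifier` trained on the pre- and post-update datasets) can stand in for the two snapshots in this sketch.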