Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy (2404.15686v2)
Abstract: The concept of differential privacy (DP) quantitatively measures privacy loss by observing the changes in the output distribution caused by the inclusion of individuals in the target dataset. DP, generally used as a constraint, has become prominent for safeguarding machine-learning datasets at industry giants such as Apple and Google. A common methodology for guaranteeing DP is incorporating appropriate noise into query outputs, thereby establishing statistical defenses against privacy attacks such as membership inference and linkage attacks. However, especially for small datasets, existing DP mechanisms occasionally add an excessive amount of noise to query outputs, thereby discarding data utility. This is because traditional DP computes privacy loss based on the worst-case scenario, i.e., statistical outliers. In this work, to tackle this challenge, we use per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing noise tailored to individual instances. In a nutshell, we propose a per-instance noise variance optimization (NVO) game, framed as a common-interest sequential game, and show that its Nash equilibrium (NE) points inherently guarantee pDP for all data instances. Through extensive experiments, our proposed pDP algorithm demonstrates an average performance improvement of up to 99.53% over the conventional DP algorithm in terms of KL divergence.
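To make the contrast between worst-case DP and per-instance DP concrete, the following is a minimal sketch, not the paper's NVO game: for a mean query under the classical Gaussian mechanism, it computes each instance's per-instance sensitivity (how much the mean shifts when that instance is removed) and the smallest per-instance noise scale that satisfies an assumed (eps, delta)-pDP budget. The function name and dataset are hypothetical; conventional DP would instead apply the largest (worst-case) scale to every query.

```python
import numpy as np

def per_instance_sigma(data, eps, delta):
    """Smallest Gaussian noise std meeting an (eps, delta)-pDP budget per instance.

    Illustrative only: uses the standard Gaussian-mechanism calibration
    sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps, with the sensitivity
    measured per instance rather than over the worst case.
    """
    n = len(data)
    mean_all = data.mean()
    # Per-instance sensitivity: shift in the mean when instance i is removed.
    sens = np.array([abs(mean_all - np.delete(data, i).mean()) for i in range(n)])
    c = np.sqrt(2 * np.log(1.25 / delta))  # standard Gaussian-mechanism constant
    return c * sens / eps

# Hypothetical toy dataset: the outlier (10.0) dominates the worst case.
data = np.array([1.0, 2.0, 2.5, 3.0, 10.0])
sigmas = per_instance_sigma(data, eps=1.0, delta=1e-5)
worst_case = sigmas.max()  # classical DP would add this much noise for everyone
print(sigmas, worst_case)
```

Most instances tolerate far less noise than the outlier requires, which is the utility gap the per-instance formulation exploits; the paper's contribution is optimizing these per-instance variances jointly via the NVO game rather than calibrating them independently as above.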