The Effect of Quantization in Federated Learning: A Rényi Differential Privacy Perspective (2405.10096v1)

Published 16 May 2024 in cs.LG, cs.CR, and cs.DC

Abstract: Federated Learning (FL) is an emerging paradigm that holds great promise for privacy-preserving machine learning using distributed data. To enhance privacy, FL can be combined with Differential Privacy (DP), which involves adding Gaussian noise to the model weights. However, FL faces the significant challenge of large communication overhead when transmitting these model weights. To address this issue, quantization is commonly employed. Nevertheless, the presence of quantized Gaussian noise introduces complexities in understanding privacy protection. This research paper investigates the impact of quantization on privacy in FL systems. We examine the privacy guarantees of quantized Gaussian mechanisms using Rényi Differential Privacy (RDP). By deriving the privacy budget of quantized Gaussian mechanisms, we demonstrate that lower quantization bit levels provide improved privacy protection. To validate our theoretical findings, we employ Membership Inference Attacks (MIA), whose attack accuracy gauges privacy leakage. The numerical results align with our theoretical analysis, confirming that quantization can indeed enhance privacy protection. This study not only enhances our understanding of the interplay between privacy and communication in FL but also underscores the advantages of quantization in preserving privacy.
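As a rough illustration of the pipeline the abstract describes (clip a client update, add Gaussian noise for DP, then quantize it before transmission), the sketch below implements a generic clipped-Gaussian-plus-uniform-quantization step and evaluates the standard RDP budget of the unquantized Gaussian mechanism, epsilon(alpha) = alpha * Delta^2 / (2 * sigma^2) (Mironov, 2017). The function names, the clipping and quantization scheme, and all parameter values are illustrative assumptions; this is not the paper's exact mechanism or its derived bound for the quantized case.

```python
import numpy as np

def gaussian_rdp_budget(alpha, sensitivity, noise_std):
    """RDP budget of the (unquantized) Gaussian mechanism:
    epsilon(alpha) = alpha * sensitivity^2 / (2 * noise_std^2)."""
    return alpha * sensitivity**2 / (2 * noise_std**2)

def quantized_gaussian_update(weights, noise_multiplier, clip_norm, num_bits, rng=None):
    """Illustrative client-side step: clip, add Gaussian noise, then
    uniformly quantize to `num_bits` bits before transmission.
    A sketch of the general pipeline studied in the paper, not the
    authors' exact mechanism; all names here are illustrative."""
    rng = np.random.default_rng() if rng is None else rng

    # Clip the update so its L2 sensitivity is bounded by clip_norm.
    norm = np.linalg.norm(weights)
    clipped = weights * min(1.0, clip_norm / (norm + 1e-12))

    # Gaussian mechanism: noise std = noise_multiplier * clip_norm.
    noisy = clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape)

    # Uniform b-bit quantization over the observed dynamic range.
    lo, hi = noisy.min(), noisy.max()
    step = (hi - lo) / (2 ** num_bits - 1) if hi > lo else 1.0
    return lo + np.round((noisy - lo) / step) * step

# Example: fewer bits -> coarser representation of the noisy update.
w = np.random.randn(1000)
for b in (8, 4, 2):
    q = quantized_gaussian_update(w, noise_multiplier=1.0, clip_norm=1.0, num_bits=b)
    print(b, "bits, distinct levels:", len(np.unique(q)))

# RDP budget of the unquantized Gaussian baseline at alpha = 2.
print("epsilon(2) =", gaussian_rdp_budget(2, sensitivity=1.0, noise_std=1.0))
```

The loop only shows that lower bit levels produce coarser transmitted updates; the paper's contribution, quantifying how this coarsening affects the RDP budget, requires the derivation in the full text.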

Authors (6)
  1. Tianqu Kang (3 papers)
  2. Lumin Liu (6 papers)
  3. Hengtao He (43 papers)
  4. Jun Zhang (1008 papers)
  5. S. H. Song (32 papers)
  6. Khaled B. Letaief (209 papers)
Citations (2)