Randomized Quantization is All You Need for Differential Privacy in Federated Learning (2306.11913v1)

Published 20 Jun 2023 in cs.LG, cs.CR, and cs.DC

Abstract: Federated learning (FL) is a common and practical framework for learning a machine learning model in a decentralized fashion. A primary motivation behind this decentralized approach is data privacy, ensuring that the learner never sees the data of each local source itself. Federated learning then comes with two major challenges: one is handling potentially complex model updates between a server and a large number of data sources; the other is that decentralization may, in fact, be insufficient for privacy, as the local updates themselves can reveal information about the sources' data. To address these issues, we consider an approach to federated learning that combines quantization and differential privacy. Absent privacy, federated learning often relies on quantization to reduce communication complexity. We build upon this approach and develop a new algorithm called the Randomized Quantization Mechanism (RQM), which obtains privacy through two levels of randomization. More precisely, we randomly sub-sample feasible quantization levels, then employ a randomized rounding procedure using these sub-sampled discrete levels. We establish that our mechanism preserves Rényi differential privacy (Rényi DP). We empirically study the performance of our algorithm and demonstrate that, compared to previous work, it yields improved privacy-accuracy trade-offs for DP federated learning. To the best of our knowledge, this is the first study that relies solely on randomized quantization, without incorporating explicit discrete noise, to achieve Rényi DP guarantees in federated learning systems.
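
The two-stage randomization described in the abstract can be made concrete with a short sketch. The following Python snippet is a minimal illustration under assumptions: a uniform quantization grid on a clipped range, endpoint levels always retained, and an illustrative subsampling probability p; the function name and parameters are invented for this sketch and are not the paper's exact construction.

```python
# A minimal sketch of two-stage randomized quantization (subsample levels,
# then randomized rounding), assuming a uniform grid on a clipped range.
# Names and defaults are illustrative, not the paper's exact RQM.
import numpy as np

def rqm_sketch(x, num_levels=17, p=0.5, lo=-1.0, hi=1.0, rng=None):
    """Quantize each coordinate of x with two layers of randomness:
    (1) randomly sub-sample the fixed grid of quantization levels, then
    (2) randomly round each value to one of the two surviving levels
        that bracket it."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(x, dtype=float), lo, hi)

    levels = np.linspace(lo, hi, num_levels)
    # Stage 1: keep each level with probability p; the endpoints are always
    # kept in this sketch so every clipped value stays bracketed.
    keep = rng.random(num_levels) < p
    keep[0] = keep[-1] = True
    active = levels[keep]

    flat = x.ravel()
    out = np.empty_like(flat)
    for i, v in enumerate(flat):
        j = np.searchsorted(active, v, side="right")
        j = min(max(j, 1), len(active) - 1)
        low, high = active[j - 1], active[j]
        # Stage 2: randomized rounding, choosing `high` with probability
        # proportional to v's distance from `low` (unbiased given the grid).
        prob_high = 0.0 if high == low else (v - low) / (high - low)
        out[i] = high if rng.random() < prob_high else low
    return out.reshape(x.shape)

# Example: compress a clipped gradient vector before sending it to the server.
grad = np.array([0.13, -0.72, 0.05, 0.98])
print(rqm_sketch(grad, num_levels=9, p=0.5))
```

The output of each call lies on a small random subset of grid points, which is what lets the mechanism serve double duty as a compression step and a source of randomness for privacy.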

Authors (4)
  1. Yeojoon Youn (2 papers)
  2. Zihao Hu (12 papers)
  3. Juba Ziani (36 papers)
  4. Jacob Abernethy (46 papers)
Citations (16)