Belt and Braces: When Federated Learning Meets Differential Privacy (2404.18814v2)
Abstract: Federated learning (FL) has great potential for large-scale machine learning without exposing raw data. Differential privacy (DP) is the de facto standard for privacy protection with provable guarantees. Advances in ML suggest that DP would be a natural fit for FL, offering comprehensive privacy preservation. Hence, extensive efforts have been devoted to achieving practically usable FL with DP, which, however, remains challenging. Practitioners are often not fully aware of its development and categorization, and they face a hard choice between privacy and utility. This calls for a holistic review of current advances and an investigation of the challenges and opportunities for highly usable FL systems with a DP guarantee. In this article, we first introduce the primary concepts of FL and DP and highlight the benefits of their integration. We then review current developments by categorizing the different paradigms and notions. Aiming at usable FL with DP, we present optimization principles for a better tradeoff between model utility and privacy loss. Finally, we discuss future challenges in emergent application areas and relevant research topics.
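The utility/privacy tradeoff the abstract refers to typically arises from the clip-then-noise pattern used in DP-FedAvg-style algorithms: each client's model update is clipped to bound its sensitivity, and calibrated Gaussian noise is added to the aggregate. A minimal sketch follows; the function name, parameters, and noise calibration are illustrative assumptions, not the paper's specific algorithm.

```python
import numpy as np

def dp_fedavg_round(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One round of client-level DP federated averaging (illustrative sketch):
    clip each client's update to bound per-client sensitivity, average the
    clipped updates, then add Gaussian noise (the Gaussian mechanism)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Scale down any update whose L2 norm exceeds the clipping bound.
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise std is calibrated to the per-client sensitivity of the mean,
    # clip_norm / n; larger noise_multiplier means stronger privacy, lower utility.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

updates = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # L2 norms 5.0 and 0.5
noisy_avg = dp_fedavg_round(updates, clip_norm=1.0, noise_multiplier=0.0)
# With noise_multiplier=0 this reduces to the average of the clipped updates.
```

Tightening `clip_norm` or raising `noise_multiplier` strengthens the DP guarantee at the cost of model utility, which is exactly the tradeoff the optimization principles in the article aim to improve.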