How to Privately Tune Hyperparameters in Federated Learning? Insights from a Benchmark Study (2402.16087v2)

Published 25 Feb 2024 in cs.CR

Abstract: In this paper, we address the problem of privacy-preserving hyperparameter (HP) tuning for cross-silo federated learning (FL). We first perform a comprehensive measurement study that benchmarks various HP tuning strategies suitable for FL. Our benchmarks show that the optimal parameters of the FL server, e.g., the learning rate, can be accurately and efficiently tuned based on the HPs found by each client on its local data. We demonstrate that HP averaging is suitable for iid settings, while density-based clustering can uncover the optimal set of parameters in non-iid ones. Then, to prevent information leakage from the exchange of the clients' local HPs, we design and implement PrivTuna, a novel framework for privacy-preserving HP tuning using multiparty homomorphic encryption. We use PrivTuna to implement privacy-preserving federated averaging and density-based clustering, and we experimentally evaluate its performance, demonstrating its computation/communication efficiency and its precision in tuning hyperparameters.
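The abstract describes a server-side aggregation step: each client tunes hyperparameters on its local data, and the server combines the reported values by averaging (iid settings) or density-based clustering (non-iid settings). The snippet below is a minimal plaintext sketch of that idea using scikit-learn's DBSCAN; it deliberately omits PrivTuna's multiparty homomorphic encryption layer, and the function and variable names (aggregate_hps, client_hps) are illustrative assumptions rather than the paper's API.

```python
# Plaintext sketch (not the paper's encrypted PrivTuna protocol) of combining
# clients' locally tuned hyperparameters on the server.
import numpy as np
from sklearn.cluster import DBSCAN

def aggregate_hps(client_hps: np.ndarray, iid: bool = True,
                  eps: float = 0.1, min_samples: int = 2) -> np.ndarray:
    """client_hps: (n_clients, n_hps) array, e.g. columns = [log10(lr), momentum]."""
    if iid:
        # iid data: clients' local optima agree, so a simple average suffices.
        return client_hps.mean(axis=0)
    # non-iid data: cluster the locally found HPs and return the centroid
    # of the densest cluster (DBSCAN labels noise points as -1).
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(client_hps)
    valid = labels[labels >= 0]
    if valid.size == 0:  # no dense region found: fall back to the plain mean
        return client_hps.mean(axis=0)
    best = np.bincount(valid).argmax()
    return client_hps[labels == best].mean(axis=0)

# Example: three clients report (log10 learning rate, momentum) tuned locally.
local = np.array([[-2.0, 0.9], [-2.1, 0.9], [-0.5, 0.5]])
print(aggregate_hps(local, iid=False, eps=0.3))  # centroid of the dense pair
```

In the paper these aggregation steps run under multiparty homomorphic encryption; the plaintext version above only illustrates why averaging suffices when client data is iid, while clustering isolates the dominant local optimum when it is not.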

Authors (3)
  1. Natalija Mitic (1 paper)
  2. Apostolos Pyrgelis (24 papers)
  3. Sinem Sav (9 papers)
