Federated Unlearning: A Survey on Methods, Design Guidelines, and Evaluation Metrics (2401.05146v3)

Published 10 Jan 2024 in cs.LG and cs.CR

Abstract: Federated learning (FL) enables the collaborative training of an ML model across multiple parties, preserving the privacy of users and institutions by keeping data stored locally. Instead of centralizing raw data, FL exchanges locally refined model parameters to build a global model incrementally. While FL is more compliant with emerging regulations such as the European General Data Protection Regulation (GDPR), it remains unclear how to ensure the right to be forgotten in this context, i.e., how to allow FL participants to remove their data contributions from the learned model. In addition, it is recognized that malicious clients may inject backdoors into the global model through their updates, e.g., to induce mispredictions on specially crafted data examples. Consequently, there is a need for mechanisms that guarantee individuals the ability to remove their data and erase malicious contributions even after aggregation, without compromising the "good" knowledge already acquired. This highlights the need for novel federated unlearning (FU) algorithms that can efficiently remove specific clients' contributions without full model retraining. This article provides background concepts, empirical evidence, and practical guidelines for designing and implementing efficient FU schemes. The study includes a detailed analysis of the metrics for evaluating unlearning in FL and presents an in-depth literature review that categorizes state-of-the-art FU contributions under a novel taxonomy. Finally, we outline the most relevant open technical challenges and identify the most promising research directions in the field.
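To make the mechanics the abstract refers to concrete, below is a minimal, self-contained Python sketch (not taken from the paper) of FedAvg-style training, where clients send locally refined parameters and the server aggregates them, together with the naive unlearning baseline that FU schemes aim to avoid: retraining from scratch without the departing client. The toy linear model and the names `local_update`, `fedavg`, and `unlearn_by_retraining` are illustrative assumptions, not the paper's code.

```python
import numpy as np

def local_update(global_weights, client_data, lr=0.1, epochs=1):
    """One round of local refinement: plain gradient descent on a toy linear model."""
    w = global_weights.copy()
    X, y = client_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg(client_weights, client_sizes):
    """Server-side aggregation: average client models weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def train_federated(clients, dim, rounds=10):
    """Build the global model incrementally from locally refined parameters."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(w_global, data) for data in clients]
        sizes = [len(data[1]) for data in clients]
        w_global = fedavg(updates, sizes)
    return w_global

def unlearn_by_retraining(clients, forget_idx, dim, rounds=10):
    """Naive unlearning baseline: full retraining without the target client's data."""
    remaining = [c for i, c in enumerate(clients) if i != forget_idx]
    return train_federated(remaining, dim, rounds)

rng = np.random.default_rng(0)
dim = 5
clients = [(rng.normal(size=(50, dim)), rng.normal(size=50)) for _ in range(4)]
w_all = train_federated(clients, dim)
w_forgotten = unlearn_by_retraining(clients, forget_idx=2, dim=dim)
print("parameter shift after unlearning client 2:", np.linalg.norm(w_all - w_forgotten))
```

Efficient FU algorithms of the kind the survey catalogs aim to approximate `w_forgotten` at a fraction of this retraining cost, for instance by calibrating or reverting the stored historical updates of the target client.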

Authors (5)
  1. Nicolò Romandini (1 paper)
  2. Alessio Mora (5 papers)
  3. Carlo Mazzocca (4 papers)
  4. Rebecca Montanari (5 papers)
  5. Paolo Bellavista (20 papers)
Citations (10)