Privacy and Fairness in Federated Learning: on the Perspective of Trade-off (2306.14123v1)

Published 25 Jun 2023 in cs.LG, cs.AI, cs.CR, and cs.CY

Abstract: Federated learning (FL) has been a hot topic in recent years. Ever since it was introduced, researchers have endeavored to devise FL systems that protect privacy or ensure fair results, with most research focusing on one or the other. Although privacy and fairness are both crucial ethical notions, their interactions have been comparatively less studied. Yet because the two objectives compete, pursuing either in isolation inevitably comes at the cost of the other. To provide a broad view of these two critical topics, we present a detailed literature review of privacy and fairness issues, highlighting the unique challenges posed by FL and the solutions proposed in federated settings. We further systematically survey the interactions between privacy and fairness, revealing how each can affect the other, and point out new research directions for fair and private FL.
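
The abstract juxtaposes two mechanisms that the surveyed literature shows can pull against each other: differentially private aggregation (clipped, noised client updates in federated averaging) and group-fairness metrics such as demographic parity. The minimal sketch below illustrates that tension; it is our own illustration, not code from the paper, and all names (`clip_and_noise`, `fedavg_round`, `demographic_parity_gap`) and parameter values are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def clip_and_noise(update, clip_norm=1.0, noise_std=0.1):
    # DP-SGD-style mechanism: bound each client's influence by clipping
    # the update to a fixed L2 norm, then add Gaussian noise. In a real
    # system noise_std is calibrated to a target (epsilon, delta) budget.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def fedavg_round(global_weights, client_updates, private=True):
    # One synchronous FedAvg round: average the (optionally privatized)
    # client updates and apply them to the global model.
    updates = [clip_and_noise(u) if private else u for u in client_updates]
    return global_weights - np.mean(updates, axis=0)

def demographic_parity_gap(preds, groups):
    # Group-fairness metric: |P(yhat=1 | g=0) - P(yhat=1 | g=1)| for
    # binary predictions and a binary protected attribute.
    preds, groups = np.asarray(preds), np.asarray(groups)
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

# Toy round: a data-rich majority client sends a large, well-estimated
# update while a minority client sends a small one. Clipping shrinks the
# former and noise swamps the latter, so the privatized aggregate can
# drift away from the minority client's signal.
w = np.zeros(4)
updates = [rng.normal(0.0, 5.0, size=4),   # majority client
           rng.normal(0.0, 0.2, size=4)]   # minority client
w_private = fedavg_round(w, updates, private=True)
w_plain   = fedavg_round(w, updates, private=False)
print("aggregate shift under DP:", np.linalg.norm(w_private - w_plain))
print("parity gap on toy preds:", demographic_parity_gap([1, 1, 0, 1], [0, 0, 1, 1]))
```

The point of the toy round is qualitative: clipping and noise act uniformly on updates of very different scales, which is one route by which privacy mechanisms produce the disparate impact on underrepresented groups that the survey discusses.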

Authors (5)
  1. Huiqiang Chen (7 papers)
  2. Tianqing Zhu (85 papers)
  3. Tao Zhang (481 papers)
  4. Wanlei Zhou (63 papers)
  5. Philip S. Yu (592 papers)
Citations (30)