Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks (2403.03149v2)

Published 5 Mar 2024 in cs.CR, cs.DC, and cs.LG

Abstract: Recent studies have revealed that federated learning (FL), once considered secure because clients do not share their private data with the server, is vulnerable to attacks such as client-side training data distribution inference, in which a malicious client can reconstruct a victim's data. While various countermeasures exist, they are often impractical, assuming that the server has access to some training data or knows the label distribution before the attack. In this work, we bridge this gap by proposing InferGuard, a novel Byzantine-robust aggregation rule that defends against client-side training data distribution inference attacks. In InferGuard, the server first computes the coordinate-wise median of all the model updates it receives; a client's model update is considered malicious if it deviates significantly from this median. We thoroughly evaluate InferGuard on five benchmark datasets and compare it with ten baseline methods. Our experiments show that our defense is highly effective against client-side training data distribution inference attacks, even strong adaptive ones, and that our method substantially outperforms the baselines in various practical FL scenarios.
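
Below is a minimal sketch, in Python with NumPy, of the aggregation rule the abstract describes. It is not the authors' implementation: the threshold tau and the median-scaled distance cutoff are assumptions introduced here for illustration, since the abstract only states that an update is flagged when it "significantly deviates" from the coordinate-wise median.

```python
import numpy as np

def inferguard_aggregate(updates, tau=2.0):
    """Filter-then-average aggregation in the spirit of InferGuard.

    updates: list of 1-D NumPy arrays (flattened model updates), one per client.
    tau: hypothetical deviation threshold; the paper's exact criterion for
         "significantly deviates" is not specified in the abstract.
    """
    U = np.stack(updates)              # shape: (n_clients, n_params)
    median = np.median(U, axis=0)      # coordinate-wise median update

    # Distance of each client's update from the median update.
    dists = np.linalg.norm(U - median, axis=1)

    # Flag updates whose distance is far above the typical distance.
    # This median-scaled cutoff is one plausible instantiation of
    # "significant deviation", not the paper's stated rule.
    scale = np.median(dists) + 1e-12   # guard against a zero scale
    benign = dists <= tau * scale

    # Average the updates judged benign; fall back to the pure
    # coordinate-wise median if every client was flagged.
    if benign.any():
        return U[benign].mean(axis=0)
    return median
```

Averaging the surviving updates, rather than returning the raw median, preserves more information from benign clients; whether InferGuard itself averages or takes the median of the retained updates is not stated in the abstract.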
