
Logits Poisoning Attack in Federated Distillation

Published 8 Jan 2024 in cs.LG (arXiv:2401.03685v1)

Abstract: Federated Distillation (FD) is a promising distributed machine learning paradigm in which knowledge distillation enables more efficient and flexible cross-device knowledge transfer in federated learning. By optimizing local models with knowledge distillation, FD avoids uploading large-scale model parameters to the central server while keeping raw data on local clients. Despite the growing popularity of FD, prior work has left poisoning attacks within this framework largely unexplored, so its vulnerability to adversarial actions remains poorly understood. To this end, we introduce FDLA, a poisoning attack method tailored to FD. FDLA manipulates the logits communicated in FD, aiming to significantly degrade client model performance by misleading the discrimination of private samples. Through extensive simulation experiments across a variety of datasets, attack scenarios, and FD configurations, we demonstrate that FDLA effectively compromises client model accuracy, outperforming established baseline algorithms. Our findings underscore the critical need for robust defense mechanisms in FD settings to mitigate such adversarial threats.
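To make the attack surface concrete, the following is a minimal sketch of logit poisoning in an FD-style exchange. It is an illustration, not the paper's FDLA algorithm: it assumes clients upload per-sample logit vectors that the server averages into soft labels, and the hypothetical `poison_logits` function simply swaps each vector's largest and smallest entries so the most likely class appears least likely to honest clients distilling from the aggregate.

```python
def poison_logits(logits):
    """Illustrative logit-poisoning step: for each sample's logit vector,
    swap the largest and smallest entries so the top class is demoted.
    (Hypothetical attack for exposition; not the paper's exact method.)"""
    poisoned = []
    for vec in logits:
        vec = list(vec)
        hi = vec.index(max(vec))  # index of the most likely class
        lo = vec.index(min(vec))  # index of the least likely class
        vec[hi], vec[lo] = vec[lo], vec[hi]
        poisoned.append(vec)
    return poisoned


def aggregate(all_client_logits):
    """Server-side knowledge aggregation: average each sample's logit
    vector across clients (a common FD aggregation scheme)."""
    n_clients = len(all_client_logits)
    n_samples = len(all_client_logits[0])
    return [
        [
            sum(client[i][j] for client in all_client_logits) / n_clients
            for j in range(len(all_client_logits[0][i]))
        ]
        for i in range(n_samples)
    ]


# One honest client and one poisoned client on a single 3-class sample.
honest = [[2.0, 0.5, -1.0]]          # class 0 is the true prediction
malicious = poison_logits(honest)     # [[-1.0, 0.5, 2.0]]
soft_labels = aggregate([honest, malicious])
# The averaged soft label no longer favors class 0, misleading distillation.
```

Even this naive perturbation shifts the aggregated soft labels away from the true class, which is why FD systems that exchange raw logits without integrity checks are exposed to such manipulation.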
