
MGIC: A Multi-Label Gradient Inversion Attack based on Canny Edge Detection on Federated Learning (2403.08284v1)

Published 13 Mar 2024 in cs.CV

Abstract: As a new distributed computing framework that can protect data privacy, federated learning (FL) has attracted increasing attention in recent years. FL receives gradients from users to train a global model and releases the trained model back to the participating users. Nevertheless, gradient inversion (GI) attacks expose the risk of privacy leakage in federated learning: using only the shared gradients and hundreds of thousands of simple iterations, an attacker can reconstruct relatively accurate private data stored on users' local devices. Several works propose simple but effective strategies to recover user data under single-label datasets; however, these strategies achieve a satisfactory visual quality of the inverted image only at the expense of high time costs, and because a single label carries limited semantic information, the reconstructed image may contain semantic errors. We present MGIC, a novel gradient inversion strategy based on Canny edge detection, applicable to both multi-label and single-label datasets. To reduce the semantic errors caused by a single label, we add new blocks of convolutional layers to the trained model to obtain the image's multiple labels; this multi-label representation reduces serious semantic errors in the inverted images. We then analyze how model parameters affect the difficulty of reconstructing the input image and discuss how multiple subjects in an image affect inversion performance. Our strategy yields better visual inversion results than the most widely used approaches while saving more than 78% of the time cost on the ImageNet dataset.
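The abstract's claim that shared gradients alone suffice to recover private data can be illustrated with a well-known special case (this is general background, not the paper's MGIC strategy): for a fully-connected layer with a bias, the weight gradient is the outer product of the bias gradient and the input, so the private input is recoverable in closed form from the gradients. A minimal NumPy sketch with hypothetical toy dimensions:

```python
import numpy as np

# Toy model: one fully-connected layer out = W x + b with squared-error loss.
# Known identity: dL/dW = (dL/db) x^T, so any row of dL/dW divided by the
# matching entry of dL/db recovers the private input x exactly.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
x_private = rng.normal(size=4)      # user's private data, never shared
target = rng.normal(size=3)

# The client computes gradients locally and shares them, as in FL.
out = W @ x_private + b
err = out - target                  # dL/dout for the loss 0.5 * ||out - t||^2
grad_W = np.outer(err, x_private)   # dL/dW
grad_b = err                        # dL/db

# The attacker reconstructs x from the shared gradients alone.
i = int(np.argmax(np.abs(grad_b)))  # pick a numerically safe row
x_reconstructed = grad_W[i] / grad_b[i]

print(np.allclose(x_reconstructed, x_private))  # True
```

For deeper networks no such closed form exists, which is why GI attacks like the ones discussed here instead run an iterative optimization that matches dummy gradients to the observed ones.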

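As background on the edge detector the method builds on: the full Canny pipeline comprises Gaussian smoothing, gradient computation (typically with Sobel kernels), non-maximum suppression, and hysteresis thresholding. The sketch below implements only the Sobel gradient-magnitude stage with a single threshold (a simplified stand-in for Canny, not the paper's implementation; the image and threshold are illustrative):

```python
import numpy as np

def sobel_edges(img, thresh=0.3):
    """Sobel gradient magnitude with a single threshold: a simplified
    sketch of Canny's gradient stage (the full Canny algorithm adds
    smoothing, non-maximum suppression, and hysteresis)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = p[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12        # normalize to [0, 1]
    return (mag > thresh).astype(np.uint8)

# A white square on a black background: edges fire on the border only.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = sobel_edges(img)
print(edges[3, 2], edges[3, 3])     # border pixel, interior pixel: 1 0
```

Edge maps like this provide structural priors on object boundaries, which is the kind of signal the paper leverages to guide the inversion.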
Authors (2)
  1. Can Liu
  2. Jin Wang
