Temporal Gradient Inversion Attacks with Robust Optimization (2306.07883v1)

Published 13 Jun 2023 in cs.LG and cs.CR

Abstract: Federated Learning (FL) has emerged as a promising approach for collaborative model training without sharing private data. However, privacy concerns regarding the information exchanged during FL have received significant research attention. Gradient Inversion Attacks (GIAs) have been proposed to reconstruct the private data retained by local clients from the exchanged gradients. As the dimensionality of the private data and the complexity of the model grow, data reconstruction by GIAs becomes harder; existing methods overcome these challenges by adopting prior knowledge about the private data. In this paper, we first observe that GIAs with gradients from a single iteration fail to reconstruct private data due to the insufficient dimensionality of the leaked gradients, complex model architectures, and invalid gradient information. We investigate a Temporal Gradient Inversion Attack with a Robust Optimization framework, called TGIAs-RO, which recovers private data without any prior knowledge by leveraging multiple temporal gradients. To eliminate the negative impact of outliers, e.g., invalid gradients, on the collaborative optimization, robust statistics are proposed. Theoretical guarantees on the recovery performance and the robustness of TGIAs-RO against invalid gradients are also provided. Extensive empirical results on the MNIST, CIFAR10, ImageNet, and Reuters-21578 datasets show that TGIAs-RO with 10 temporal gradients improves reconstruction performance over state-of-the-art methods, even for large batch sizes (up to 128), complex models such as ResNet18, and large datasets such as ImageNet (224×224 pixels). The proposed attack also motivates further exploration of privacy-preserving methods for FL.
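To make the attack mechanics concrete, below is a minimal PyTorch sketch of gradient inversion over multiple temporal gradients with a robust statistic. It is an illustration, not the authors' TGIAs-RO implementation: the function name gradient_inversion, the negative-cosine-similarity matching loss, and the median as the robust aggregator are assumptions for the sketch, and it simplifies by matching every leaked gradient against a single model snapshot rather than the per-round model states an actual attack would use.

```python
import torch
import torch.nn.functional as F

def gradient_inversion(model, leaked_grads, label, input_shape,
                       steps=300, lr=0.1):
    """Reconstruct a private input by matching several leaked gradients.

    leaked_grads: list of gradient tuples, one per observed FL round
    (the "temporal" gradients). label: the (assumed known) target label.
    """
    x = torch.randn(input_shape, requires_grad=True)   # dummy input
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        per_round = []
        for g_leak in leaked_grads:
            pred = model(x)
            loss = F.cross_entropy(pred, label)
            # Gradients w.r.t. the model parameters, kept differentiable
            # (create_graph=True) so we can optimize x through them.
            g = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
            # Gradient-matching loss: negative cosine similarity between
            # the dummy gradient and this round's leaked gradient.
            match = sum(1.0 - F.cosine_similarity(a.flatten(),
                                                  b.flatten(), dim=0)
                        for a, b in zip(g, g_leak))
            per_round.append(match)
        # Robust statistic: aggregate the per-round losses with a median
        # instead of a mean, so invalid/outlier gradients have bounded
        # influence on the reconstruction.
        torch.stack(per_round).median().backward()
        opt.step()
    return x.detach()
```

Swapping the median for a plain mean recovers an ordinary multi-gradient attack, which is exactly what invalid or outlier gradients can derail; the robust aggregator is what bounds their influence.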

Authors (8)
  1. Bowen Li (166 papers)
  2. Hanlin Gu (33 papers)
  3. Ruoxin Chen (9 papers)
  4. Jie Li (553 papers)
  5. Chentao Wu (15 papers)
  6. Na Ruan (11 papers)
  7. Xueming Si (1 paper)
  8. Lixin Fan (77 papers)
