
Logit Calibration and Feature Contrast for Robust Federated Learning on Non-IID Data (2404.06776v1)

Published 10 Apr 2024 in cs.LG, cs.AI, and cs.CV

Abstract: Federated learning (FL) is a privacy-preserving distributed framework for collaborative model training on devices in edge networks. However, challenges arise from vulnerability to adversarial examples (AEs) and from the non-independent and identically distributed (non-IID) nature of data across devices, hindering the deployment of adversarially robust and accurate learning models at the edge. While adversarial training (AT) is widely acknowledged as an effective defense against adversarial attacks in centralized training, we shed light on the adverse effects of directly applying AT in FL, which can severely compromise accuracy, especially under non-IID data. Given this limitation, this paper proposes FatCC, which incorporates local logit Calibration and global feature Contrast into the vanilla federated adversarial training (FAT) process from both the logit and the feature perspective. This approach effectively enhances the federated system's robust accuracy (RA) and clean accuracy (CA). First, we propose logit calibration, where logits are calibrated during local adversarial updates, thereby improving adversarial robustness. Second, FatCC introduces feature contrast, a global alignment term that aligns each local representation with unbiased global features, further enhancing robustness and accuracy in federated adversarial environments. Extensive experiments across multiple datasets demonstrate that FatCC achieves comparable or superior gains in both CA and RA compared to other baselines.
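
The abstract describes two local-training ingredients: logit calibration applied during adversarial updates, and a feature-contrast term that pulls each client's representations toward global features. The PyTorch sketch below illustrates what one such local client update could look like. It is only a minimal sketch, not the paper's implementation: the PGD attack settings, the logit-adjustment-style calibration using local class counts (`class_counts`, `tau`), the cosine alignment to server-provided class prototypes (`global_protos`, `lam`), and the assumption that the model returns a `(logits, features)` pair are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard PGD to craft adversarial examples for adversarial training."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits, _ = model(x_adv)                     # assumed: model -> (logits, features)
        grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

def local_fatcc_step(model, optimizer, x, y, class_counts, global_protos,
                     tau=1.0, lam=1.0):
    """One illustrative local update: calibrated cross-entropy on adversarial
    examples plus alignment of local features to global class prototypes."""
    x_adv = pgd_attack(model, x, y)
    logits, feats = model(x_adv)
    # Hypothetical calibration: add scaled log class priors from the local
    # label distribution (logit-adjustment style); the paper's exact form may differ.
    log_prior = torch.log(class_counts.float() + 1e-12)
    loss_ce = F.cross_entropy(logits + tau * log_prior, y)
    # Feature contrast (sketch): pull each local feature toward the global
    # prototype of its class via cosine similarity.
    proto = global_protos[y]                         # (batch, feat_dim)
    loss_align = 1.0 - F.cosine_similarity(feats, proto, dim=1).mean()
    loss = loss_ce + lam * loss_align
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `global_protos` stands in for the "unbiased global features" mentioned in the abstract (e.g., per-class feature averages aggregated at the server), and the client would run `local_fatcc_step` over its mini-batches before sending model updates back for federated averaging.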

Authors (4)
  1. Yu Qiao (563 papers)
  2. Chaoning Zhang (66 papers)
  3. Apurba Adhikary (15 papers)
  4. Choong Seon Hong (165 papers)
Citations (2)
