Advances and Open Challenges in Federated Foundation Models (2404.15381v4)

Published 23 Apr 2024 in cs.LG and cs.AI

Abstract: The integration of Foundation Models (FMs) with Federated Learning (FL) presents a transformative paradigm in AI, offering enhanced capabilities while addressing concerns of privacy, data decentralization, and computational efficiency. This paper provides a comprehensive survey of the emerging field of Federated Foundation Models (FedFM), elucidating the synergy between FMs and FL and exploring the novel methodologies, challenges, and future directions that the FL research field needs to focus on in order to thrive in the age of FMs. A systematic multi-tiered taxonomy is proposed, categorizing existing FedFM approaches by model training, aggregation, trustworthiness, and incentivization. Key challenges are thoroughly discussed, including how to enable FL to cope with the high computational demands of FMs, privacy considerations, contribution evaluation, and communication efficiency. Moreover, this paper examines the intricate challenges of communication, scalability, and security inherent in training and fine-tuning FMs via FL, and highlights the potential of quantum computing to revolutionize training, inference, optimization, and security. The survey also introduces the implementation requirements of FedFM and several practical FedFM applications, and distills the lessons learned from these findings. Finally, this survey not only provides insights into the current state and challenges of FedFM, but also offers a blueprint for future research directions, emphasizing the need for trustworthy solutions. It serves as a foundational guide for researchers and practitioners interested in contributing to this interdisciplinary and rapidly advancing field.
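
As a concrete illustration of the taxonomy's training and aggregation tiers, the sketch below shows one FedFM-style communication round in Python: each client locally updates only small LoRA-style low-rank adapter factors, and the server combines them with FedAvg-style weighted averaging. This is a minimal sketch under assumed conventions; the function names, the adapter rank, the noise-based stand-in for local fine-tuning, and the dataset sizes are all hypothetical and not taken from the paper.

```python
# Minimal sketch of one FedFM-style round: clients fine-tune only small
# LoRA-style adapter factors locally; the server averages those factors
# weighted by local dataset size (FedAvg over adapters, not full weights).
import numpy as np

def local_update(adapter, rng):
    """Stand-in for a client's local fine-tuning step.

    Returns an updated copy of the low-rank adapter factors (A, B). A real
    client would run gradient steps on its private data; here small random
    perturbations stand in for that update (illustration only).
    """
    A, B = adapter
    return (A + 0.01 * rng.standard_normal(A.shape),
            B + 0.01 * rng.standard_normal(B.shape))

def aggregate(updates, sizes):
    """FedAvg over adapter factors: mean weighted by client data size."""
    total = sum(sizes)
    A = sum(w * a for (a, _), w in zip(updates, sizes)) / total
    B = sum(w * b for (_, b), w in zip(updates, sizes)) / total
    return A, B

rng = np.random.default_rng(0)
d, r = 64, 4                      # hidden width and adapter rank (r << d)
global_adapter = (np.zeros((d, r)), np.zeros((r, d)))
client_sizes = [1200, 800, 400]   # hypothetical per-client dataset sizes

for _ in range(3):                # a few communication rounds
    updates = [local_update(global_adapter, rng) for _ in client_sizes]
    global_adapter = aggregate(updates, client_sizes)
    # Only the d*r + r*d adapter entries cross the network each round, not
    # the full foundation-model weights -- the communication saving that
    # motivates parameter-efficient FedFM training.
```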

Authors (12)
  1. Chao Ren
  2. Han Yu
  3. Hongyi Peng
  4. Xiaoli Tang
  5. Anran Li
  6. Yulan Gao
  7. Alysa Ziying Tan
  8. Bo Zhao
  9. Xiaoxiao Li
  10. Zengxiang Li
  11. Qiang Yang
  12. Liping Yi