
Federated Continual Learning via Knowledge Fusion: A Survey (2312.16475v1)

Published 27 Dec 2023 in cs.LG and cs.AI

Abstract: Data privacy and data silos pose nontrivial, significant challenges in many real-world applications. Federated learning is a decentralized approach to training models across multiple local clients without exchanging raw data between client devices and global servers. However, most existing work focuses on a static data environment and ignores continual learning from streaming data with incremental tasks. Federated Continual Learning (FCL) is an emerging paradigm that addresses model learning in combined federated and continual learning environments. The key objective of FCL is to fuse heterogeneous knowledge from different clients and retain knowledge of previous tasks while learning new ones. In this work, we first delineate federated learning and continual learning, and then discuss their integration, i.e., FCL, and in particular FCL via knowledge fusion. In summary, our contributions are four-fold: we (1) raise a fundamental problem called "spatial-temporal catastrophic forgetting" and evaluate its impact on performance using the well-known federated averaging (FedAvg) method, (2) integrate most existing FCL methods into two generic frameworks, namely synchronous FCL and asynchronous FCL, (3) categorize a large number of methods according to the knowledge-fusion mechanism involved, and (4) present an outlook on future work in FCL.
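To make the FedAvg baseline and the forgetting evaluation mentioned in the abstract concrete, below is a minimal, illustrative sketch (not taken from the paper; the function names `fedavg_aggregate` and `average_forgetting` are hypothetical, and client states are assumed to be PyTorch-style dicts of tensors). The first function performs the data-size-weighted parameter averaging that defines FedAvg; the second computes a standard forgetting score from a task-accuracy matrix, the kind of measurement one would use to quantify catastrophic forgetting over a task sequence.

```python
import copy

def fedavg_aggregate(client_states, client_sizes):
    """FedAvg: average client model parameters, weighted by local data size.

    client_states: list of state dicts (name -> tensor) from local clients
    client_sizes:  list of local training-set sizes (aggregation weights)
    """
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        # Weighted sum of each parameter tensor across clients.
        global_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state

def average_forgetting(acc):
    """Average forgetting over a task sequence.

    acc[t][j] is accuracy on task j measured after training on task t.
    Forgetting of task j = best past accuracy on j minus final accuracy
    on j, averaged over all but the last task.
    """
    T = len(acc)
    drops = [
        max(acc[t][j] for t in range(j, T)) - acc[-1][j]
        for j in range(T - 1)
    ]
    return sum(drops) / len(drops) if drops else 0.0
```

In a federated continual setting, forgetting arises along two axes: temporally, as each client moves on to new tasks, and spatially, as aggregation mixes in parameters from clients whose data distributions differ; the "spatial-temporal catastrophic forgetting" raised in this survey names the compounding of the two.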
