How to Collaborate: Towards Maximizing the Generalization Performance in Cross-Silo Federated Learning (2401.13236v2)
Abstract: Federated learning (FL) has attracted significant attention as a privacy-preserving distributed learning framework. In this work, we focus on cross-silo FL, where clients become the owners of the trained model and are concerned only with the model's generalization performance on their local data. Due to data heterogeneity, asking all clients to join a single FL training process may degrade model performance. To investigate the effectiveness of collaboration, we first derive a generalization bound for each client when collaborating with others and when training independently. We show that a client's generalization performance can be improved only by collaborating with clients that have more training data and similar data distributions. Our analysis allows us to formulate a client utility maximization problem in which clients are partitioned into multiple collaborating groups. We then propose a hierarchical clustering-based collaborative training (HCCT) scheme, which does not require the number of groups to be fixed in advance. We further analyze the convergence of HCCT for general non-convex loss functions, which reveals the effect of data similarity among clients. Extensive simulations show that HCCT achieves better generalization performance than baseline schemes, while it degenerates to independent training and conventional FL in specific scenarios.
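The bound-driven grouping idea lends itself to a compact illustration. Below is a minimal Python sketch, not the paper's exact algorithm, of bottom-up client grouping in the spirit of HCCT: clients start as singleton groups and are greedily merged while a heuristic utility, which rewards pooled sample size and penalizes distribution dissimilarity, keeps improving, so the number of groups emerges from the data rather than being fixed in advance. The utility function, the distance matrix `d`, and all names here are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's exact formulation) of
# hierarchical clustering-based client grouping for cross-silo FL.
# Inputs: n[i] = number of training samples at client i;
# d[i][j] = an estimated distance between the data distributions of
# clients i and j (e.g., derived from label histograms or gradients).
import math
from itertools import combinations


def group_utility(group, n, d):
    """Heuristic utility of one collaborating group: more pooled data
    shrinks the 1/sqrt(total) estimation-error term, while dissimilar
    distributions add an average-distance penalty. Both terms are
    illustrative stand-ins for the paper's generalization bound."""
    total = sum(n[i] for i in group)
    pairs = list(combinations(sorted(group), 2))
    avg_dist = sum(d[i][j] for i, j in pairs) / len(pairs) if pairs else 0.0
    return -1.0 / math.sqrt(total) - avg_dist


def hcct_grouping(n, d):
    """Bottom-up merging: greedily merge the pair of groups whose union
    most increases total utility; stop when no merge helps. The number
    of groups is therefore not fixed a priori."""
    groups = [frozenset([i]) for i in range(len(n))]
    while len(groups) > 1:
        best_gain, best_pair = 0.0, None
        for a, b in combinations(range(len(groups)), 2):
            gain = (group_utility(groups[a] | groups[b], n, d)
                    - group_utility(groups[a], n, d)
                    - group_utility(groups[b], n, d))
            if gain > best_gain:
                best_gain, best_pair = gain, (a, b)
        if best_pair is None:
            break  # every possible merge would lower total utility
        a, b = best_pair
        merged = groups[a] | groups[b]
        groups = [g for k, g in enumerate(groups) if k not in (a, b)] + [merged]
    return groups


# Toy example: clients 0 and 1 have similar data, as do clients 2 and 3,
# while the two pairs differ strongly; the expected outcome is two groups.
n = [100, 120, 80, 90]
d = [[0.00, 0.05, 0.60, 0.70],
     [0.05, 0.00, 0.65, 0.70],
     [0.60, 0.65, 0.00, 0.10],
     [0.70, 0.70, 0.10, 0.00]]
print(hcct_grouping(n, d))  # -> [frozenset({0, 1}), frozenset({2, 3})]
```

In the paper itself, the merge criterion comes from the derived utility and generalization analysis rather than this toy score, but the stopping rule plays the same role: merging halts exactly when further collaboration stops paying off, which is how the scheme can degenerate to independent training or to a single conventional FL group.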
Authors: Yuchang Sun, Marios Kountouris, Jun Zhang