FedGreen: Carbon-aware Federated Learning with Model Size Adaptation (2404.15503v1)

Published 23 Apr 2024 in cs.LG, cs.AI, and cs.DC

Abstract: Federated learning (FL) provides a promising collaborative framework for building a model from distributed clients, and this work investigates the carbon emissions of the FL process. Cloud and edge servers hosting FL clients can exhibit diverse carbon footprints depending on their geographical locations and the power sources available there, which creates opportunities to reduce emissions by training local models with adaptive computation and communication. In this paper, we propose FedGreen, a carbon-aware FL approach that trains models efficiently by sharing adaptively sized models with clients according to their carbon profiles and locations, using ordered dropout as the model compression technique. We theoretically analyze the trade-off between the carbon emissions produced and the convergence accuracy, accounting for the carbon intensity discrepancy across countries to choose the parameters optimally. Empirical studies show that FedGreen substantially reduces the carbon footprint of FL compared to the state of the art while maintaining competitive model accuracy.
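To make the abstract's mechanism concrete, below is a minimal Python sketch of the two ingredients it names: ordered dropout, which carves nested submodels out of a shared full model, and a rule that maps a client's grid carbon intensity to a submodel width. The width thresholds, function names, and coordinate-wise aggregation rule are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' code) of carbon-aware submodel selection
# with ordered dropout. All thresholds and names below are illustrative.
import numpy as np

def pick_width_fraction(carbon_intensity_gco2_per_kwh, fractions=(0.25, 0.5, 1.0)):
    """Map a client's grid carbon intensity to a submodel width fraction.
    Thresholds are hypothetical: greener grids train larger submodels."""
    if carbon_intensity_gco2_per_kwh < 100:
        return fractions[2]      # low-carbon grid: full model
    if carbon_intensity_gco2_per_kwh < 400:
        return fractions[1]      # moderate grid: half-width submodel
    return fractions[0]          # carbon-heavy grid: smallest submodel

def extract_submodel(full_weights, p):
    """Ordered dropout keeps the *first* ceil(p * d) hidden units of each
    layer, so every smaller submodel is nested inside the full model."""
    W1, W2 = full_weights        # a 2-layer MLP: input -> hidden -> output
    k = int(np.ceil(p * W1.shape[1]))
    return W1[:, :k], W2[:k, :]

def merge_updates(full_weights, client_updates):
    """Aggregate heterogeneous-width updates back into the full model,
    averaging each coordinate over the clients that actually trained it."""
    W1, W2 = [w.copy() for w in full_weights]
    acc1, acc2 = np.zeros_like(W1), np.zeros_like(W2)
    count1, count2 = np.zeros_like(W1), np.zeros_like(W2)
    for u1, u2 in client_updates:
        k = u1.shape[1]
        acc1[:, :k] += u1; count1[:, :k] += 1
        acc2[:k, :] += u2; count2[:k, :] += 1
    W1 = np.where(count1 > 0, acc1 / np.maximum(count1, 1), W1)
    W2 = np.where(count2 > 0, acc2 / np.maximum(count2, 1), W2)
    return W1, W2

# Toy round: three clients on grids with different carbon intensities.
rng = np.random.default_rng(0)
full = (rng.normal(size=(8, 16)), rng.normal(size=(16, 4)))
updates = []
for intensity in (50, 250, 700):                  # gCO2/kWh, illustrative
    p = pick_width_fraction(intensity)
    sub = extract_submodel(full, p)
    updates.append(tuple(w + 0.01 for w in sub))  # stand-in for local training
full = merge_updates(full, updates)
```

Because ordered dropout always keeps a fixed prefix of the hidden units, every submodel is nested inside the full model, which is what allows updates of different widths to be averaged coordinate-wise at the server.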

Authors (6)
  1. Ali Abbasi (21 papers)
  2. Fan Dong (6 papers)
  3. Xin Wang (1307 papers)
  4. Henry Leung (19 papers)
  5. Jiayu Zhou (70 papers)
  6. Steve Drew (21 papers)
