Context-Aware Orchestration of Energy-Efficient Gossip Learning Schemes (2404.12023v1)

Published 18 Apr 2024 in cs.NI, cs.AI, and cs.DC

Abstract: Fully distributed learning schemes such as Gossip Learning (GL) are gaining momentum due to their scalability and effectiveness even in dynamic settings. However, they often imply heavy utilization of communication and computing resources, whose energy footprint may jeopardize the learning process, particularly on battery-operated IoT devices. To address this issue, we present Optimized Gossip Learning (OGL), a distributed training approach that combines GL with adaptive optimization of the learning process, achieving a target accuracy while minimizing the energy consumed by learning. We propose a data-driven approach to OGL management that optimizes, in real time and for each node, the number of training epochs and the choice of which model to exchange with neighbors, based on patterns of node contacts, model quality, and the resources available at each node. Our approach employs a DNN model for dynamic tuning of these parameters, trained by an infrastructure-based orchestrator function. We performed our assessments on two different datasets, leveraging time-varying random graphs and a measurement-based dynamic urban scenario. Results suggest that our approach is highly efficient and effective in a broad spectrum of network scenarios.
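To make the round structure concrete, the following is a minimal sketch of one OGL-style gossip round. It is an illustration only, not the paper's implementation: the `Node` class, the energy-aware `choose_epochs` heuristic (a stand-in for the paper's DNN-based tuner), and the toy "training" step are all hypothetical.

```python
import random

class Node:
    """One gossip-learning participant; 'model' is a flat list of weights."""
    def __init__(self, node_id, model, battery=1.0):
        self.node_id = node_id
        self.model = model
        self.battery = battery  # remaining energy, normalized to 0..1

    def choose_epochs(self, model_quality):
        # Placeholder for the paper's DNN-based tuner: train longer when
        # energy is plentiful and the local model is still weak.
        if self.battery < 0.2:
            return 1
        return 1 + round(2 * (1.0 - model_quality))

    def local_train(self, epochs, lr=0.1):
        # Toy stand-in for local SGD: shrink weights and spend energy.
        for _ in range(epochs):
            self.model = [w - lr * w for w in self.model]
            self.battery = max(0.0, self.battery - 0.01)

    def merge(self, peer_model):
        # Classic gossip merge: average local and received weights.
        self.model = [(a + b) / 2 for a, b in zip(self.model, peer_model)]

def gossip_round(nodes, contacts):
    """One round: each node trains locally, then each contact (a, b)
    delivers a's model to b, which merges it into its own."""
    for node in nodes:
        quality = random.random()  # placeholder for measured accuracy
        node.local_train(node.choose_epochs(quality))
    for a, b in contacts:
        nodes[b].merge(list(nodes[a].model))
```

In the paper's setting, the contact list would come from the time-varying graph (random or measurement-based), and `choose_epochs` and the model-exchange choice would be produced by the orchestrator-trained DNN rather than a fixed rule.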

