
Leveraging The Edge-to-Cloud Continuum for Scalable Machine Learning on Decentralized Data (2306.10848v1)

Published 19 Jun 2023 in cs.LG and cs.DC

Abstract: With mobile, IoT, and sensor devices becoming pervasive in our lives, and with recent advances in Edge Computational Intelligence (e.g., Edge AI/ML), it has become evident that traditional methods for training AI/ML models are becoming obsolete, especially given growing concerns over privacy and security. This work highlights the key challenges that prevent Edge AI/ML from seeing widespread adoption across sectors, especially in large-scale scenarios. We focus on the main challenges acting as adoption barriers for existing methods and propose a design that departs sharply from the current ill-suited approaches. The new design is model-centric: trained models are treated as a commodity that drives the exchange dynamics of collaborative learning in decentralized settings. This design is expected to provide a decentralized framework for efficient collaborative learning at scale.
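The abstract stops at the design vision and gives no implementation, but the model-as-commodity idea can be illustrated with a toy sketch. Everything below (ModelRegistry, publish, best_match, and the accuracy-based selection rule) is a hypothetical illustration under assumed semantics, not an API or algorithm from the paper: edge nodes publish trained models to a shared catalog, and a peer bootstraps collaborative learning by pulling the best-matching model rather than collecting raw data.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelListing:
    owner: str            # edge node that trained and published the model
    task: str             # task/domain tag used to match models to consumers
    accuracy: float       # self-reported validation accuracy of the model
    weights: List[float] = field(default_factory=list)

class ModelRegistry:
    """Catalog where trained models, not raw data, are the traded commodity.
    Hypothetical sketch; the paper does not specify this interface."""
    def __init__(self) -> None:
        self._listings: List[ModelListing] = []

    def publish(self, listing: ModelListing) -> None:
        # An edge node advertises its trained model to potential collaborators.
        self._listings.append(listing)

    def best_match(self, task: str) -> Optional[ModelListing]:
        # A consumer selects the strongest published model for its task
        # instead of pulling raw data from other nodes.
        candidates = [l for l in self._listings if l.task == task]
        return max(candidates, key=lambda l: l.accuracy, default=None)

if __name__ == "__main__":
    registry = ModelRegistry()
    registry.publish(ModelListing("edge-a", "keyword-spotting", 0.81, [0.1, 0.2]))
    registry.publish(ModelListing("edge-b", "keyword-spotting", 0.88, [0.3, 0.1]))
    seed = registry.best_match("keyword-spotting")
    if seed is not None:
        print(f"Bootstrapping from {seed.owner}'s model (acc={seed.accuracy:.2f})")
```

In a genuinely decentralized deployment the catalog itself would presumably be distributed (e.g., replicated or gossip-based rather than a single object), but the exchange dynamic the abstract describes, models traded as the commodity, is the same.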

