Towards cost-effective and resource-aware aggregation at Edge for Federated Learning
Abstract: Federated Learning (FL) is a machine learning approach that addresses privacy concerns and data-transfer costs by processing data at its source. It is particularly popular for Edge and IoT applications, where the FL aggregator server is placed in resource-constrained edge data centers to reduce communication costs. Existing cloud-based aggregator solutions are resource-inefficient and expensive at the Edge, resulting in low scalability and high latency. To address these challenges, this study compares prior and new aggregation methodologies under the changing demands of IoT and Edge applications. This work is the first to propose an adaptive FL aggregator at the Edge, enabling users to manage the trade-off between cost and efficiency. An extensive comparative analysis demonstrates that the design improves scalability by up to 4X and time efficiency by up to 8X, and reduces costs by more than 2X, compared to existing cloud-based static methodologies.
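As a minimal sketch of what the aggregator server in FL computes, the standard FedAvg-style step averages client model updates weighted by each client's local dataset size; the model updates are represented here as plain lists of floats, and the client sizes are hypothetical:

```python
# Minimal sketch of FedAvg-style weighted aggregation, the core
# operation an FL aggregator server performs each round.
# Assumptions: updates are flat lists of floats; weights are
# proportional to each client's local dataset size.

def fed_avg(client_updates, client_sizes):
    """Return the size-weighted average of the client model updates."""
    total = sum(client_sizes)
    aggregated = [0.0] * len(client_updates[0])
    for update, size in zip(client_updates, client_sizes):
        weight = size / total
        for i, value in enumerate(update):
            aggregated[i] += weight * value
    return aggregated

# Example: two clients holding 100 and 300 local samples.
global_update = fed_avg([[1.0, 2.0], [3.0, 4.0]], [100, 300])
```

Because the second client holds three times as much data, its update dominates the weighted average; resource-aware designs like the one proposed here change *where and how* this step runs, not the aggregation rule itself.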