Adaptive Decentralized Federated Learning in Energy and Latency Constrained Wireless Networks (2403.20075v1)

Published 29 Mar 2024 in cs.LG, cs.SY, and eess.SY

Abstract: In Federated Learning (FL), where parameters are aggregated by a central node, communication overhead is a substantial concern. To circumvent this limitation and alleviate the single point of failure within the FL framework, recent studies have introduced Decentralized Federated Learning (DFL) as a viable alternative. Considering device heterogeneity and the energy cost associated with parameter aggregation, this paper investigates how to efficiently leverage limited resources to enhance model performance. Specifically, we formulate a problem that minimizes the loss function of DFL subject to energy and latency constraints. The proposed solution optimizes the number of local training rounds across devices with varying resource budgets. To make the problem tractable, we first analyze the convergence of DFL when edge devices perform different numbers of local training rounds. The derived convergence bound reveals the impact of the number of local training rounds on model performance, and closed-form solutions for the number of local training rounds on each device are then obtained from this bound. Since these solutions require the energy cost of aggregation to be as low as possible, we adapt several graph-based aggregation schemes to minimize this energy consumption, making the approach applicable to different communication scenarios. Finally, we propose a DFL framework that jointly applies the optimized numbers of local training rounds and the energy-saving aggregation scheme. Simulation results show that the proposed algorithm achieves better performance than conventional schemes with fixed numbers of local training rounds, and consumes less energy than traditional aggregation schemes.
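
The abstract's two main ingredients can be illustrated with a short simulation: each device runs its own number of local training rounds, and models are then averaged along a low-energy aggregation topology. The sketch below is a minimal, hypothetical construction under stated assumptions, not the paper's actual algorithm: the quadratic per-device losses, the random device positions, the choice of a minimum spanning tree as the energy-saving topology, and the Metropolis mixing weights are all illustrative stand-ins.

```python
# Minimal sketch of DFL with per-device local rounds and tree-based
# aggregation. All names, the quadratic losses, and the energy model
# are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
N, D = 8, 5                       # number of devices, model dimension

# Hypothetical heterogeneous budgets -> different local rounds per device.
# The paper derives these in closed form; here they are drawn at random.
tau = rng.integers(1, 6, size=N)

# Each device holds a local quadratic loss f_i(w) = 0.5 * ||w - c_i||^2.
C = rng.normal(size=(N, D))
W = np.zeros((N, D))              # local model copies

def mst_edges(coords):
    """Prim's algorithm on pairwise distances; edge weight ~ tx energy."""
    n = len(coords)
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        best = min(((i, j) for i in in_tree
                    for j in range(n) if j not in in_tree),
                   key=lambda e: dist[e])
        edges.append(best)
        in_tree.add(best[1])
    return edges

coords = rng.uniform(size=(N, 2))  # device positions on the unit square
edges = mst_edges(coords)          # low-energy aggregation topology

# Symmetric, doubly stochastic mixing matrix over the tree (Metropolis weights).
A = np.eye(N)
deg = np.zeros(N)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
for i, j in edges:
    w = 1.0 / (1 + max(deg[i], deg[j]))
    A[i, j] = A[j, i] = w
    A[i, i] -= w
    A[j, j] -= w

lr = 0.1
for rnd in range(50):
    # Local training: device i takes tau[i] gradient steps on its own loss.
    for i in range(N):
        for _ in range(tau[i]):
            W[i] -= lr * (W[i] - C[i])
    # Decentralized aggregation: one gossip step along the tree.
    W = A @ W

# Diagnostics: how close the devices are to consensus, and how close the
# average model is to the mean of the local optima.
print("consensus gap:", np.linalg.norm(W - W.mean(0)))
print("avg model vs mean optimum:", np.linalg.norm(W.mean(0) - C.mean(0)))
```

The spanning-tree choice reflects the intuition behind the paper's energy-minimizing aggregation: if each edge's weight models the transmission energy between two devices, a minimum spanning tree is the cheapest subgraph that keeps all devices connected for aggregation. Note also that with heterogeneous `tau`, devices with larger budgets drift further toward their local optima between gossip steps, which is the tension the paper's convergence bound and closed-form round allocation are designed to balance.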

Authors (2)
  1. Zhigang Yan (7 papers)
  2. Dong Li (429 papers)

