Decentralized Learning over Wireless Networks with Broadcast-Based Subgraph Sampling (2310.16106v1)

Published 24 Oct 2023 in cs.LG, cs.DC, cs.IT, cs.SY, eess.SY, and math.IT

Abstract: This work centers on the communication aspects of decentralized learning over wireless networks, using consensus-based decentralized stochastic gradient descent (D-SGD). Accounting for the actual communication cost or delay caused by in-network information exchange in an iterative process, our goal is to achieve fast convergence measured by improvement per transmission slot. We propose BASS, an efficient communication framework for D-SGD over wireless networks with broadcast transmission and probabilistic subgraph sampling. In each iteration, we activate multiple subsets of non-interfering nodes to broadcast model updates to their neighbors. These subsets are randomly activated over time, with probabilities reflecting their importance for network connectivity and subject to a communication cost constraint (e.g., the average number of transmission slots per iteration). During the consensus update step, only bi-directional links are preserved, to maintain communication symmetry. Compared with existing link-based scheduling methods, the broadcast nature of wireless channels offers intrinsic advantages in speeding up the convergence of decentralized learning by creating more communication links with the same number of transmission slots.
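To make the mechanism described in the abstract concrete, below is a minimal Python/NumPy sketch of one BASS-style D-SGD iteration under simplifying assumptions: the partition of nodes into non-interfering broadcast subsets and their activation probabilities are given as inputs, and Metropolis-Hastings weights stand in for whatever mixing-matrix design the paper actually uses. All names here (`bass_iteration`, `partitions`, `activation_probs`) are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bass_iteration(models, adjacency, partitions, activation_probs, grads, lr=0.1):
    """One illustrative D-SGD iteration with broadcast-based subgraph sampling.

    models: (n_nodes, dim) array of local model parameters.
    adjacency: (n_nodes, n_nodes) 0/1 matrix of the base communication graph.
    partitions: list of node subsets that can broadcast without interference.
    activation_probs: activation probability for each subset.
    grads: (n_nodes, dim) stochastic gradients at the current local models.
    """
    n = models.shape[0]

    # 1. Randomly activate subsets of mutually non-interfering broadcasters.
    broadcasters = set()
    for subset, p in zip(partitions, activation_probs):
        if rng.random() < p:
            broadcasters.update(subset)

    # 2. Keep a link (i, j) only if both endpoints broadcast in this slot,
    #    so the effective mixing topology stays symmetric.
    E = np.zeros((n, n))
    for i in broadcasters:
        for j in broadcasters:
            if i != j and adjacency[i, j]:
                E[i, j] = 1.0

    # 3. Metropolis-Hastings weights on the effective topology (a standard
    #    symmetric, doubly stochastic choice; the paper's design may differ).
    deg = E.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if E[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()

    # 4. Consensus averaging of neighbors' models, then a local SGD step.
    return W @ models - lr * grads

# Example: a 4-node ring with two non-interfering broadcast subsets.
adjacency = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)
models = rng.normal(size=(4, 3))
grads = rng.normal(size=(4, 3))
models = bass_iteration(models, adjacency,
                        partitions=[{0, 2}, {1, 3}],
                        activation_probs=[0.8, 0.8],
                        grads=grads)
```

Keeping a link only when both of its endpoints broadcast in the same slot mirrors the abstract's requirement that only bi-directional links survive the consensus step; in the actual framework, the subset activation probabilities would additionally be optimized against the average per-iteration budget of transmission slots.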

