FedNMUT – Federated Noisy Model Update Tracking Convergence Analysis
Abstract: We propose FedNMUT, a novel Decentralized Noisy Model Update Tracking Federated Learning algorithm designed to operate efficiently over noisy communication channels that reflect imperfect information exchange. The algorithm uses gradient tracking to reduce the impact of data heterogeneity while keeping communication overhead low. It incorporates channel noise directly into its parameter updates, mimicking the conditions of noisy communication and enabling clients to reach consensus over a communication graph topology despite the corruption. By treating parameter sharing and noise incorporation as first-class components, FedNMUT increases the resilience of decentralized learning systems to noisy communication. For smooth non-convex objective functions, we provide theoretical guarantees showing that our algorithm reaches an $\epsilon$-stationary solution at a rate of $\mathcal{O}\left(\frac{1}{\sqrt{T}}\right)$, where $T$ is the total number of communication rounds. Through empirical validation, we further demonstrate that FedNMUT outperforms existing state-of-the-art methods and conventional parameter-mixing approaches under imperfect information sharing, confirming its ability to counteract the negative effects of communication noise in a decentralized learning framework.
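To make the abstract's mechanism concrete, below is a minimal sketch of decentralized gradient tracking with additive channel noise injected into the communicated variables, in the spirit of the consensus-plus-tracking update the abstract describes. This is an illustration, not the authors' exact FedNMUT update rule: the quadratic local objectives, ring topology, Gaussian noise model, and all hyperparameters (`eta`, `sigma`, `T`) are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem setup: n clients, each with a heterogeneous local quadratic
# objective f_i(x) = 0.5 * ||x - b_i||^2, so grad f_i(x) = x - b_i.
n, d, T = 4, 10, 500
eta = 0.05      # step size (illustrative)
sigma = 0.01    # std of additive channel noise (illustrative)
b = rng.normal(size=(n, d))

# Doubly stochastic mixing matrix W for a ring communication graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def grad(x):
    return x - b  # stacked local gradients, shape (n, d)

x = np.zeros((n, d))   # local models
y = grad(x)            # gradient trackers, initialized at local gradients
g_prev = grad(x)

for t in range(T):
    # Neighbors receive noisy copies of the models and trackers,
    # mimicking an imperfect communication channel.
    x_noisy = x + sigma * rng.normal(size=x.shape)
    y_noisy = y + sigma * rng.normal(size=y.shape)

    # Consensus step on the (noisy) models, then descend along the tracker.
    x = W @ x_noisy - eta * y

    # Tracker update: mix neighbors' noisy trackers and add the change in
    # the local gradient, so y follows the network-average gradient.
    g_new = grad(x)
    y = W @ y_noisy + g_new - g_prev
    g_prev = g_new

# With small noise, every local model should approach the global
# minimizer mean(b); the residual reflects the injected channel noise.
print(np.linalg.norm(x - b.mean(axis=0), axis=1))
```

Because the noise enters the update itself rather than being ignored, the consensus and tracking steps average it out across rounds, which is consistent with the abstract's claim that noise-aware updates yield resilience; the $\mathcal{O}(1/\sqrt{T})$ rate in the paper is proved for the smooth non-convex case rather than this toy quadratic.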