How Robust is Federated Learning to Communication Error? A Comparison Study Between Uplink and Downlink Channels (2310.16652v2)
Abstract: Because of its privacy-preserving capability, federated learning (FL) has attracted significant attention from both academia and industry. However, when FL is implemented over wireless networks, it is unclear how much communication error it can tolerate. This paper investigates the robustness of FL to uplink and downlink communication errors. Our theoretical analysis reveals that this robustness depends on two critical parameters: the number of clients and the numerical range of the model parameters. We also show that the uplink communication in FL can tolerate a higher bit error rate (BER) than the downlink, and we quantify this difference with a proposed formula. The findings and theoretical analyses are further validated by extensive experiments.
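To make the uplink/downlink asymmetry concrete, here is a minimal simulation sketch, not the paper's experimental setup: it injects i.i.d. bit flips into IEEE-754 float32 parameters and compares the residual error after aggregation. Uplink corruption hits each client's update before averaging, while downlink corruption hits the broadcast global model directly. All variable names and parameter values (`n_clients`, `dim`, `ber`) are illustrative assumptions.

```python
import numpy as np

def flip_bits(params, ber, rng):
    """Flip each bit of a float32 array independently with probability `ber`."""
    bits = params.astype(np.float32).view(np.uint32)
    mask = np.zeros(bits.shape, dtype=np.uint32)
    for b in range(32):
        # Each of the 32 bit positions flips with probability `ber`.
        mask |= (rng.random(bits.shape) < ber).astype(np.uint32) << np.uint32(b)
    return (bits ^ mask).view(np.float32)

rng = np.random.default_rng(0)
n_clients, dim, ber = 50, 10_000, 1e-4  # illustrative values
updates = rng.normal(0, 0.01, size=(n_clients, dim)).astype(np.float32)
clean = updates.mean(axis=0)

# Uplink: each client's update is corrupted independently, then averaged,
# so each client's errors are scaled by 1/n_clients in the aggregate.
uplink = np.mean([flip_bits(u, ber, rng) for u in updates], axis=0)

# Downlink: the already-averaged global model is corrupted once,
# and every client receives that corrupted copy at full strength.
downlink = flip_bits(clean, ber, rng)

print("uplink MSE:  ", np.mean((uplink - clean) ** 2))
print("downlink MSE:", np.mean((downlink - clean) ** 2))
```

Under these assumptions the uplink MSE should come out roughly a factor of `n_clients` smaller than the downlink MSE, which is consistent with the abstract's claim that uplink communication tolerates a higher BER; flipped exponent bits also show why the numerical range of the parameters matters, since a single flip can blow a small weight up to a huge value.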