
Improving Federated Aggregation with Deep Unfolding Networks (2306.17362v1)

Published 30 Jun 2023 in cs.LG

Abstract: The performance of federated learning (FL) is degraded by device heterogeneity and differing statistical characteristics among participating clients. To address this issue, we introduce a deep unfolding network (DUN)-based technique that learns adaptive weights to mitigate the adverse impacts of heterogeneity in an unbiased manner. The proposed method achieves strong accuracy and quality-aware aggregation. Furthermore, we evaluate the best weight-normalization approach for reducing the computational cost of aggregation. The numerical experiments in this study demonstrate the effectiveness of this approach and provide insights into the interpretability of the learned unbiased weights. By incorporating these weights into the model, the proposed approach effectively handles quality-aware aggregation under heterogeneity of the participating clients and the FL environment. Codes and details are \href{https://github.com/shanikairoshi/Improved_DUN_basedFL_Aggregation}{here}.
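The core idea the abstract describes, learning normalized per-client aggregation weights instead of using fixed FedAvg averaging, can be sketched as follows. This is a minimal illustration, not the authors' implementation (see their linked repository); the function names and the softmax normalization choice here are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    """Normalize learnable logits into aggregation weights that sum to 1."""
    e = np.exp(logits - logits.max())  # shift for numerical stability
    return e / e.sum()

def aggregate(client_params, logits):
    """Quality-aware aggregation: mix client updates with learned weights.

    In a deep unfolding network, one such weighted-averaging step forms one
    'layer', and the logits are trained end-to-end across unfolded rounds.
    """
    weights = softmax(logits)
    return sum(w * p for w, p in zip(weights, client_params))

# Toy round: three clients send parameter vectors to the server.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
logits = np.zeros(3)  # untrained logits -> uniform weights, recovering FedAvg
agg = aggregate(clients, logits)  # -> array([3., 4.]), the plain average
```

With zero (untrained) logits the weights are uniform and the step reduces to ordinary federated averaging; training the logits across unfolded rounds lets the server down-weight low-quality or heterogeneous clients.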

