FedMABA: Towards Fair Federated Learning through Multi-Armed Bandits Allocation (2410.20141v1)

Published 26 Oct 2024 in cs.LG and cs.CR

Abstract: The increasing concern for data privacy has driven the rapid development of federated learning (FL), a privacy-preserving collaborative paradigm. However, statistical heterogeneity among clients in FL leads to inconsistent performance of the server model across clients: the server model may favor certain clients while performing poorly for others, heightening the challenge of fairness. In this paper, we reconsider the inconsistency in the client performance distribution and introduce the adversarial multi-armed bandit framework to optimize the proposed objective under explicit constraints on performance disparities. In practice, we propose a novel multi-armed bandit-based allocation FL algorithm (FedMABA) to mitigate performance unfairness among diverse clients with different data distributions. Extensive experiments in different non-I.I.D. scenarios demonstrate the exceptional performance of FedMABA in enhancing fairness.
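
The abstract describes the method only at a high level, so the following is a minimal sketch of what bandit-driven aggregation could look like, not the paper's actual algorithm. It treats each client as an arm and applies a Hedge-style exponential-weights update (the full-information relative of adversarial bandit algorithms such as Exp3): clients on which the current server model performs worse receive larger aggregation weights in the next round, pushing the performance distribution toward uniformity. The function names, the learning rate lr, and the update rule itself are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of bandit-driven aggregation weights; the exact
# FedMABA update rule is not given in the abstract. Each client is an
# "arm"; a Hedge-style exponential-weights update raises the weight of
# clients with high loss, so the next aggregation favors lagging clients.

def hedge_allocation(client_losses, log_weights, lr=0.1):
    """One round of exponential-weights allocation over clients.

    client_losses: per-client losses of the current server model
    log_weights:   running log-weights, one per client
    lr:            bandit step size (hypothetical hyperparameter)
    """
    # Treat a client's loss as its "gain" so underserved clients
    # accumulate weight; work in log space for numerical stability.
    log_weights = log_weights + lr * np.asarray(client_losses)
    probs = np.exp(log_weights - log_weights.max())
    return probs / probs.sum(), log_weights

def aggregate(client_updates, probs):
    """Weighted average of the clients' (flattened) model updates."""
    return sum(p * u for p, u in zip(probs, client_updates))

# Toy usage: three clients with heterogeneous performance.
losses = [0.9, 0.4, 0.2]            # client 0 lags behind
log_w = np.zeros(3)
probs, log_w = hedge_allocation(losses, log_w)
updates = [np.full(4, float(i)) for i in range(3)]
print(probs)                        # client 0 gets the largest weight
global_delta = aggregate(updates, probs)
```

In a real FL round these weights would replace the usual data-size-proportional FedAvg coefficients; the paper's constrained objective and any exploration terms are omitted here.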
