
Mitigating federated learning contribution allocation instability through randomized aggregation (2405.08044v2)

Published 13 May 2024 in cs.LG and cs.AI

Abstract: Federated learning (FL) is a collaborative, privacy-preserving machine learning paradigm that allows the development of robust models without the need to centralise sensitive data. A critical challenge in FL lies in fairly and accurately allocating contributions from diverse participants. Inaccurate allocation can undermine trust and lead to unfair compensation, leaving participants with little incentive to join or actively contribute to the federation. Various remuneration strategies have been proposed to date, including auction-based approaches and Shapley-value-based methods, the latter offering a means to quantify the contribution of each participant. However, little to no work has studied the stability of these contribution evaluation methods. In this paper, we focus on calculating contributions using gradient-based model reconstruction techniques with Shapley values. We first show that baseline Shapley values do not accurately reflect clients' contributions, leading to unstable reward allocations amongst participants in a cross-silo federation. We then introduce FedRandom, a new method that mitigates these shortcomings through additional data samplings, and show its efficacy at increasing the stability of contribution evaluation in federated learning.
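For context on the contribution measure the abstract relies on: the Shapley value of client i is its marginal contribution v(S ∪ {i}) − v(S), averaged with the weights |S|!(n − |S| − 1)!/n! over all coalitions S of the remaining clients, where v maps a coalition to the performance of the model built from that coalition's updates. The sketch below is a minimal illustration of this exact computation, not the paper's FedRandom method; the `utility` callback and the silo names are hypothetical stand-ins for the gradient-based model reconstruction the abstract describes.

```python
from itertools import combinations
from math import factorial

def shapley_values(clients, utility):
    """Exact Shapley values for a small set of federated clients.

    clients: list of hashable client identifiers.
    utility: callable mapping a frozenset of client ids to a real score,
             e.g. validation accuracy of the model aggregated (or, as in
             the paper, reconstructed from gradients) for that coalition.
    """
    n = len(clients)
    values = {c: 0.0 for c in clients}
    for c in clients:
        others = [x for x in clients if x != c]
        for k in range(n):  # size of the coalition S not containing c
            # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Marginal contribution of c when joining coalition S
                values[c] += weight * (utility(s | {c}) - utility(s))
    return values

# Toy check with a hypothetical additive utility: each client's Shapley
# value then equals its own share, as the Shapley axioms require.
shares = {"silo_a": 0.5, "silo_b": 0.3, "silo_c": 0.2}
print(shapley_values(list(shares), lambda s: sum(shares[c] for c in s)))
# -> {'silo_a': 0.5, 'silo_b': 0.3, 'silo_c': 0.2} (up to float error)
```

Exact evaluation needs 2^(n−1) coalition utilities per client, so it is only practical for small cross-silo federations; the instability of such estimates under repeated evaluation is precisely the issue the paper targets with FedRandom's additional data samplings.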

Authors (3)
  1. Arno Geimer
  2. Beltran Fiz
  3. Radu State
