
Asynchronous Federated Learning with Incentive Mechanism Based on Contract Theory (2310.06448v1)

Published 10 Oct 2023 in cs.LG and cs.DC

Abstract: To address the heterogeneity inherent in federated learning (FL) and to attract high-quality clients, various incentive mechanisms have been employed. However, existing incentive mechanisms are typically paired with conventional synchronous aggregation, which suffers from significant straggler issues. In this study, we propose a novel asynchronous FL framework that integrates an incentive mechanism based on contract theory. Within the incentive mechanism, we maximize the utility of the task publisher by adaptively adjusting clients' local training epochs, taking into account factors such as time delay and test accuracy. In the asynchronous scheme, we devise quality-aware aggregation weights and an access control algorithm to facilitate asynchronous aggregation. In experiments on the MNIST dataset, the test accuracy achieved by our framework is 3.12% and 5.84% higher than that of FedAvg and FedProx, respectively, in the attack-free setting, and 1.35% higher than that of the ideal Local SGD under attacks. Furthermore, for the same target accuracy, our framework requires notably less computation time than both FedAvg and FedProx.
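The paper does not reproduce its aggregation rule here, but the asynchronous scheme it builds on is typified by staleness-weighted server updates in the style of asynchronous federated optimization (Xie et al., reference 3): the server mixes in each arriving client model with a weight that decays with how outdated the client's base model was. A minimal sketch, assuming a polynomial staleness decay (the paper's exact quality-aware weights and access-control rule are not shown here):

```python
# Hedged sketch of staleness-weighted asynchronous aggregation (FedAsync-style).
# The polynomial decay exponent `a` and the mixing rate `base_alpha` are
# illustrative assumptions, not the paper's actual parameters.

def staleness_weight(base_alpha, staleness, a=0.5):
    """Down-weight a client update computed against an older global model."""
    return base_alpha / ((1 + staleness) ** a)

def async_aggregate(global_model, client_model, base_alpha, staleness):
    """Mix one client's model into the global model, parameter-wise."""
    alpha = staleness_weight(base_alpha, staleness)
    return [(1 - alpha) * g + alpha * c
            for g, c in zip(global_model, client_model)]

# A fresh update (staleness 0) moves the global model more than a stale one.
g = [0.0, 0.0]
fresh = async_aggregate(g, [1.0, 1.0], base_alpha=0.6, staleness=0)
stale = async_aggregate(g, [1.0, 1.0], base_alpha=0.6, staleness=8)
```

Because updates arrive one at a time, the server never waits on stragglers; the decay keeps very stale contributions from dragging the global model backward.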

References (8)
  1. Y. Jiao, P. Wang, D. Niyato, B. Lin, and D. I. Kim, “Toward an automated auction framework for wireless federated learning services market,” IEEE Trans. Mob. Comput., vol. 20, no. 10, pp. 3034–3048, 2021.
  2. J. Kang, Z. Xiong, D. Niyato, S. Xie, and J. Zhang, “Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory,” IEEE Internet Things J., vol. 6, no. 6, pp. 10700–10714, 2019.
  3. C. Xie, O. Koyejo, and I. Gupta, “Asynchronous federated optimization,” ArXiv, vol. abs/1903.03934, 2019.
  4. C.-H. Hu, Z. Chen, and E. G. Larsson, “Scheduling and aggregation design for asynchronous federated learning over wireless networks,” IEEE J. Sel. Areas Commun., vol. 41, no. 4, pp. 874–886, 2023.
  5. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pp. 1273–1282, 2017.
  6. R. Strausz, “Bolton, P., and Dewatripont, M.: Contract Theory,” Journal of Economics, vol. 86, pp. 305–308, 2005.
  7. L. Deng, “The mnist database of handwritten digit images for machine learning research,” IEEE Signal Process Mag., vol. 29, no. 6, pp. 141–142, 2012.
  8. T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated Optimization in Heterogeneous Networks,” in Proceedings of Machine Learning and Systems, vol. 2, pp. 429–450, 2020.
Citations (2)


