QI-DPFL: Quality-Aware and Incentive-Boosted Federated Learning with Differential Privacy (2404.08261v1)

Published 12 Apr 2024 in cs.GT

Abstract: Federated Learning (FL) has increasingly been recognized as an innovative and secure distributed model training paradigm, aiming to coordinate multiple edge clients to collaboratively train a shared model without uploading their private datasets. The challenge of encouraging mobile edge devices to participate actively in FL model training, while mitigating privacy leakage risks during wireless transmission, remains comparatively unexplored. In this paper, we propose a novel approach, named QI-DPFL (Quality-Aware and Incentive-Boosted Federated Learning with Differential Privacy), to address this issue. To select clients with high-quality datasets, we first propose a quality-aware client selection mechanism based on the Earth Mover's Distance (EMD) metric. Furthermore, to attract high-quality data contributors, we design an incentive-boosted mechanism that models the interactions between the central server and the selected clients as a two-stage Stackelberg game: the central server designs a time-dependent reward to minimize its cost by trading off accuracy loss against the total reward allocated, and each selected client chooses its privacy budget to maximize its utility. The Nash Equilibrium of the Stackelberg game is derived to find the optimal solution in each global iteration. Extensive experimental results on different real-world datasets demonstrate the effectiveness of our proposed FL framework in achieving both privacy protection and incentive compatibility.
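The quality-aware client selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each client reports its label counts, measures data quality as the EMD between the client's label distribution and a reference (e.g. uniform or global) distribution, and keeps the clients closest to the reference. For unordered label categories, FL work commonly computes this distance as the L1 distance between the two distributions; the function and variable names here are hypothetical.

```python
import numpy as np

def emd(p, q):
    """EMD between two categorical label distributions, computed as the
    L1 distance, a common simplification for unordered labels in FL."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.abs(p - q).sum())

def select_clients(client_label_counts, reference_dist, k):
    """Rank clients by EMD to the reference label distribution and
    keep the k clients with the lowest EMD (highest data quality)."""
    scored = []
    for cid, counts in client_label_counts.items():
        counts = np.asarray(counts, dtype=float)
        scored.append((emd(counts / counts.sum(), reference_dist), cid))
    scored.sort()  # ascending EMD: most representative datasets first
    return [cid for _, cid in scored[:k]]
```

For example, with a uniform two-class reference, a client holding a 50/50 label split has EMD 0 and is preferred over a client holding a 90/10 split (EMD 0.8).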

Authors (2)
  1. Wenhao Yuan (8 papers)
  2. Xuehe Wang (11 papers)