
FedQV: Leveraging Quadratic Voting in Federated Learning

Published 2 Jan 2024 in cs.CR and cs.LG (arXiv:2401.01168v2)

Abstract: Federated Learning (FL) permits different parties to collaboratively train a global model without disclosing their respective local labels. A crucial step of FL, that of aggregating local models to produce the global one, shares many similarities with public decision-making, and elections in particular. In that context, a major weakness of FL, namely its vulnerability to poisoning attacks, can be interpreted as a consequence of the one person one vote (henceforth 1p1v) principle underpinning most contemporary aggregation rules. In this paper, we propose FedQV, a novel aggregation algorithm built upon the quadratic voting scheme, recently proposed as a better alternative to 1p1v-based elections. Our theoretical analysis establishes that FedQV is a truthful mechanism in which bidding according to one's true valuation is a dominant strategy that achieves a convergence rate that matches those of state-of-the-art methods. Furthermore, our empirical analysis using multiple real-world datasets validates the superior performance of FedQV against poisoning attacks. It also shows that combining FedQV with unequal voting "budgets" according to a reputation score increases its performance benefits even further. Finally, we show that FedQV can be easily combined with Byzantine-robust privacy-preserving mechanisms to enhance its robustness against both poisoning and privacy attacks.
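The core idea behind quadratic-voting aggregation can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's actual algorithm: the similarity-based scoring rule, the budget-capping logic, and the name `fedqv_style_aggregate` are invented for exposition. The load-bearing property is the quadratic cost of voting: acquiring v votes costs v^2 credits, so a client that spends c credits obtains sqrt(c) votes, which dampens the influence any single (possibly malicious) party can buy.

```python
import numpy as np

def fedqv_style_aggregate(updates, similarities, budgets):
    """Aggregate client updates with quadratic-voting-style weights.

    updates      : array of shape (n_clients, n_params), local model updates
    similarities : per-client credibility scores in [0, 1] (e.g. similarity
                   to a trusted reference update); the scoring rule here is
                   an illustrative assumption
    budgets      : per-client voice-credit budgets
    """
    votes = []
    for sim, budget in zip(similarities, budgets):
        # Credits spent this round, capped by the client's budget
        # (illustrative spending rule: spend proportionally to the score).
        credits = min(max(sim, 0.0), budget)
        # Quadratic cost: v votes cost v**2 credits, so c credits buy sqrt(c) votes.
        votes.append(np.sqrt(credits))
    votes = np.asarray(votes)
    if votes.sum() == 0:
        # No client earned any votes: fall back to plain averaging (FedAvg-style).
        return np.mean(updates, axis=0)
    weights = votes / votes.sum()
    return np.average(updates, axis=0, weights=weights)
```

With three clients where the third submits a dissimilar (score-0) update, that client receives zero votes and its update is excluded from the weighted average, while the two credible clients split the weight equally.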
