FedQV: Leveraging Quadratic Voting in Federated Learning
Abstract: Federated Learning (FL) permits different parties to collaboratively train a global model without disclosing their respective local data. A crucial step of FL, that of aggregating local models to produce the global one, shares many similarities with public decision-making, and elections in particular. In that context, a major weakness of FL, namely its vulnerability to poisoning attacks, can be interpreted as a consequence of the one-person-one-vote (henceforth 1p1v) principle underpinning most contemporary aggregation rules. In this paper, we propose FedQV, a novel aggregation algorithm built upon the quadratic voting scheme, recently proposed as a better alternative to 1p1v-based elections. Our theoretical analysis establishes that FedQV is a truthful mechanism, in which bidding according to one's true valuation is a dominant strategy, and that it achieves a convergence rate matching those of state-of-the-art methods. Furthermore, our empirical analysis using multiple real-world datasets validates the superior performance of FedQV against poisoning attacks. It also shows that combining FedQV with unequal voting "budgets" allocated according to a reputation score increases its performance benefits even further. Finally, we show that FedQV can be easily combined with Byzantine-robust privacy-preserving mechanisms to enhance its robustness against both poisoning and privacy attacks.
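To give intuition for the core idea, the following is a minimal sketch (not the paper's actual algorithm) of how quadratic voting could weight local updates during aggregation. It assumes each client's "bid" is a score for its update and that votes grow only as the square root of the credits spent, capped by a per-client budget; the function names and the use of a simple similarity score as the bid are illustrative assumptions, not details taken from FedQV itself.

```python
import numpy as np

def quadratic_vote_weights(scores, budgets):
    """Hypothetical quadratic-voting weights for aggregation.

    Under quadratic voting, buying v votes costs v**2 credits, so a
    client that spends c credits obtains sqrt(c) votes. Capping credits
    by a per-client budget limits how much any single (possibly
    malicious) client can sway the global model.
    """
    credits = np.minimum(np.asarray(scores, dtype=float),
                         np.asarray(budgets, dtype=float))
    votes = np.sqrt(np.clip(credits, 0.0, None))  # sqrt dampens large bids
    total = votes.sum()
    if total == 0.0:
        return np.full(len(votes), 1.0 / len(votes))  # fall back to uniform
    return votes / total

def aggregate(updates, scores, budgets):
    """Weighted average of local updates using quadratic-voting weights."""
    weights = quadratic_vote_weights(scores, budgets)
    return sum(w * u for w, u in zip(weights, np.asarray(updates, dtype=float)))
```

For example, a client bidding 4 credits against two clients bidding 1 each receives weight 2/(2+1+1) = 0.5 rather than the 4/6 ≈ 0.67 it would get under linear weighting, illustrating how the square-root rule blunts outsized bids.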