Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning (2401.05562v1)
Abstract: Federated learning (FL) enables multiple participants to train a global machine learning model without sharing their private training data. Peer-to-peer (P2P) FL advances existing centralized FL paradigms by eliminating the server that aggregates local models from participants and then updates the global model. However, P2P FL is vulnerable to (i) honest-but-curious participants whose objective is to infer the private training data of other participants, and (ii) Byzantine participants who can transmit arbitrarily manipulated local models to corrupt the learning process. P2P FL schemes that simultaneously guarantee Byzantine resilience and preserve privacy have received little study. In this paper, we develop Brave, a protocol that guarantees Byzantine Resilience And privacy preservation for P2P FL in the presence of both types of adversaries. We show that Brave preserves privacy by establishing that no honest-but-curious adversary can infer other participants' private data by observing their models. We further prove that Brave is Byzantine-resilient, which guarantees that all benign participants converge to an identical model whose deviation from a global model trained without Byzantine adversaries is bounded. We evaluate Brave against three state-of-the-art adversaries in P2P FL for image classification tasks on the benchmark datasets CIFAR10 and MNIST. Our results show that the global model learned with Brave in the presence of adversaries achieves classification accuracy comparable to that of a global model trained in the absence of any adversary.
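To make the P2P setting described above concrete, the sketch below shows one aggregation round at a single benign participant using a coordinate-wise trimmed mean, a standard Byzantine-robust rule from the literature. This is only an illustration of the setting under assumptions: it is not Brave's actual protocol (which additionally provides privacy-preserving aggregation), and the helper names `trimmed_mean` and `p2p_round` are hypothetical.

```python
import numpy as np

# Sketch of one P2P FL aggregation round with a Byzantine-robust rule
# (coordinate-wise trimmed mean). Illustrative only; not Brave's protocol.

def trimmed_mean(models, f):
    """Coordinate-wise trimmed mean over a list of flattened model vectors.

    models: list of 1-D numpy arrays (local models received from peers)
    f: number of suspected Byzantine models to trim from each side
    """
    stacked = np.sort(np.stack(models), axis=0)  # sort each coordinate independently
    trimmed = stacked[f: len(models) - f]        # drop the f smallest and f largest values
    return trimmed.mean(axis=0)

def p2p_round(own_model, received_models, f):
    """One aggregation step at a benign participant, tolerating up to f
    arbitrarily manipulated (Byzantine) models among those received."""
    return trimmed_mean([own_model] + received_models, f)

# Toy usage: 4 benign peers plus 1 Byzantine peer sending a large outlier.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = [rng.normal(loc=1.0, scale=0.1, size=5) for _ in range(4)]
    byzantine = np.full(5, 1e6)  # arbitrarily corrupted local model
    aggregated = p2p_round(benign[0], benign[1:] + [byzantine], f=1)
    print(aggregated)            # remains close to the benign models' mean
```

In this sketch the robustness comes purely from trimming outlier coordinates; Brave's contribution, per the abstract, is to obtain a comparable Byzantine-resilience guarantee while also preventing honest-but-curious participants from inferring private data from the exchanged models.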