
Privacy-preserving Decentralized Deep Learning with Multiparty Homomorphic Encryption (2207.04604v1)

Published 11 Jul 2022 in cs.CR

Abstract: Decentralized deep learning plays a key role in collaborative model training due to its attractive properties, including tolerance of high network latency and lower susceptibility to single points of failure. Unfortunately, this training mode is more vulnerable to data privacy leaks than other distributed training frameworks. Existing efforts rely exclusively on differential privacy as the cornerstone for alleviating the data privacy threat. However, it remains unclear whether differential privacy can provide a satisfactory utility-privacy trade-off for model training, given the inherent tension between the noise it injects and model utility. To address this problem, we propose D-MHE, the first secure and efficient decentralized training framework with lossless precision. Inspired by recent developments in homomorphic encryption, we design a multiparty version of Brakerski-Fan-Vercauteren (BFV), one of the most advanced cryptosystems, and use it to implement private gradient updates of users' local models. D-MHE reduces the communication complexity of general Secure Multiparty Computation (MPC) tasks from quadratic to linear in the number of users, making it scalable to large decentralized learning systems. Moreover, D-MHE provides strict semantic security even if a majority of users collude. We conduct extensive experiments on the MNIST and CIFAR-10 datasets to demonstrate the superiority of D-MHE over existing schemes in terms of model accuracy, computation cost, and communication cost.
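The core idea in the abstract, aggregating encrypted gradient updates so that each user sends a single ciphertext (linear communication) rather than pairwise shares (quadratic), can be illustrated with any additively homomorphic scheme. The sketch below uses a toy Paillier cryptosystem rather than the multiparty BFV scheme the paper actually builds, and the fixed small primes, scaling factor, and gradient values are all illustrative assumptions, not details from the paper. Multiplying ciphertexts adds the underlying plaintexts, so the aggregator sums gradients without ever seeing them.

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=1000003, q=1000033):
    """Toy Paillier keypair with fixed small primes (insecure; illustration only)."""
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1  # standard choice of generator
    # mu = L(g^lam mod n^2)^{-1} mod n, where L(x) = (x - 1) // n
    L = (pow(g, lam, n * n) - 1) // n
    mu = pow(L, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m % n, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    L = (pow(c, lam, n * n) - 1) // n
    s = (L * mu) % n
    return s - n if s > n // 2 else s  # centered decoding handles negatives

SCALE = 1000  # fixed-point scaling so float gradients become integers

pk, sk = keygen()
grads = [[0.5, -1.25], [0.25, 0.75], [-0.125, 0.5]]  # one vector per user

# Each user sends only its own ciphertexts (communication linear in #users);
# the aggregator multiplies ciphertexts coordinate-wise, adding the plaintexts.
agg = [1, 1]
for g in grads:
    for j, v in enumerate(g):
        agg[j] = (agg[j] * encrypt(pk, int(round(v * SCALE)))) % (pk[0] ** 2)

summed = [decrypt(pk, sk, c) / SCALE for c in agg]
print(summed)  # [0.625, 0.0]
```

In the paper's actual setting the decryption key is itself secret-shared across users via multiparty BFV, so no single party can decrypt an individual update; the toy above collapses that step into one keypair purely to show the homomorphic aggregation.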

Citations (3)
