
Efficient privacy-preserving inference for convolutional neural networks (2110.08321v2)

Published 15 Oct 2021 in cs.LG and cs.CR

Abstract: Processing sensitive user data with deep learning models is an area that has gained recent traction. Existing work has leveraged homomorphic encryption (HE) schemes to enable computation on encrypted data; an early example, CryptoNets, takes 250 seconds for a single MNIST inference. The main limitation of such approaches is the expensive FFT-like operations required to compute on HE-encrypted ciphertexts. Others have proposed model pruning and efficient data representations to reduce the number of HE operations required. We improve on existing work by proposing changes to the representations of intermediate tensors during CNN inference. We construct and evaluate private CNNs on the MNIST and CIFAR-10 datasets, and achieve over a two-fold reduction in the number of operations used for inference with the CryptoNets architecture.
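The abstract rests on two ideas worth seeing concretely: under HE, a CNN must be expressed using only additions and multiplications (CryptoNets, for instance, replaces ReLU with a square activation), and the quantity to minimize is the number of such operations on ciphertexts. The sketch below illustrates that structure in plaintext; the layer sizes, the multiplication counter, and the use of NumPy in place of an actual HE library are illustrative assumptions, not the paper's construction.

```python
# A minimal plaintext sketch of a CryptoNets-style, HE-friendly CNN forward
# pass. Shapes and the operation counter are assumptions for illustration;
# under HE, every multiplication below would act on encrypted ciphertexts,
# so the multiplication count is the cost the paper aims to reduce.
import numpy as np

mults = 0  # count multiplications, the dominant cost under HE

def conv2d(x, w):
    """Valid convolution over a single-channel image (adds + mults only)."""
    global mults
    h, wd = x.shape
    kh, kw = w.shape[1:]
    out = np.zeros((w.shape[0], h - kh + 1, wd - kw + 1))
    for c in range(w.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[c, i, j] = np.sum(x[i:i + kh, j:j + kw] * w[c])
                mults += kh * kw
    return out

def square(x):
    """HE-friendly activation: x**2 in place of ReLU (one mult per element)."""
    global mults
    mults += x.size
    return x * x

def sum_pool(x, k=2):
    """Scaled mean pooling as a plain sum -- additions are cheap under HE."""
    h, w = x.shape[1] // k, x.shape[2] // k
    return x[:, :h * k, :w * k].reshape(x.shape[0], h, k, w, k).sum(axis=(2, 4))

rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))            # MNIST-sized input
kernels = rng.standard_normal((5, 5, 5))         # 5 filters of 5x5
dense_w = rng.standard_normal((10, 5 * 12 * 12)) # final linear layer

feat = square(conv2d(image, kernels))            # conv + square activation
pooled = sum_pool(feat)                          # 5 x 12 x 12 feature maps
logits = dense_w @ pooled.reshape(-1)            # 10 class scores
mults += dense_w.size

print(f"logits shape: {logits.shape}, multiplications: {mults}")
```

Changing how the intermediate tensors (`feat`, `pooled`) are packed into ciphertexts is the lever the paper pulls: a better representation lets the same layers be evaluated with fewer ciphertext operations.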
