CrypTFlow: Secure TensorFlow Inference (1909.07814v2)

Published 16 Sep 2019 in cs.CR, cs.LG, and cs.PL

Abstract: We present CrypTFlow, a first-of-its-kind system that converts TensorFlow inference code into Secure Multi-party Computation (MPC) protocols at the push of a button. To do this, we build three components. Our first component, Athos, is an end-to-end compiler from TensorFlow to a variety of semi-honest MPC protocols. The second component, Porthos, is an improved semi-honest 3-party protocol that provides significant speedups for TensorFlow-like applications. Finally, to provide malicious secure MPC protocols, our third component, Aramis, is a novel technique that uses hardware with integrity guarantees to convert any semi-honest MPC protocol into an MPC protocol that provides malicious security. The malicious security of the protocols output by Aramis relies on the integrity of the hardware and the semi-honest security of MPC. Moreover, our system matches the inference accuracy of plaintext TensorFlow. We experimentally demonstrate the power of our system by showing the secure inference of real-world neural networks such as ResNet50 and DenseNet121 over the ImageNet dataset with running times of about 30 seconds for semi-honest security and under two minutes for malicious security. Prior work in the area of secure inference has been limited to semi-honest security of small networks over tiny datasets such as MNIST or CIFAR. Even on MNIST/CIFAR, CrypTFlow outperforms prior work.

Authors (6)
  1. Nishant Kumar (35 papers)
  2. Mayank Rathee (4 papers)
  3. Nishanth Chandran (13 papers)
  4. Divya Gupta (13 papers)
  5. Aseem Rastogi (18 papers)
  6. Rahul Sharma (88 papers)
Citations (223)

Summary

  • The paper introduces CrypTFlow as an automated system that converts TensorFlow inference code into secure MPC protocols while preserving accuracy.
  • The paper details three components—Athos, Porthos, and Aramis—that ensure compatibility, efficiency, and malicious security in secure neural network inference.
  • The evaluation on ImageNet-scale networks such as ResNet50 and DenseNet121 demonstrates CrypTFlow's scalability, with semi-honest secure inference completing in roughly 30 seconds and maliciously secure inference in under two minutes.

An Overview of CrypTFlow: Secure TensorFlow Inference

The paper introduces CrypTFlow, a pioneering system designed to convert TensorFlow inference code into Secure Multi-party Computation (MPC) protocols in an automated manner. The system is relevant in scenarios where secure computation of machine learning models is required, allowing multiple parties to jointly compute a function over their combined confidential data without revealing the underlying inputs. It comprises three major components: Athos, Porthos, and Aramis, each contributing to achieving secure, accurate, and efficient TensorFlow inference.

Core Components

  1. Athos: An end-to-end compiler that translates TensorFlow inference code into MPC protocols. Because the MPC protocols work over fixed-point (integer) arithmetic rather than floating point, Athos converts floating-point models to fixed point while matching the accuracy of the original TensorFlow models, automating a conversion that prior work performed manually (a minimal fixed-point sketch follows this list).
  2. Porthos: An improved semi-honest three-party protocol that significantly reduces communication overhead by optimizing the sub-protocols for convolutions and the non-linear layers common in neural networks. On ImageNet-scale networks such as ResNet50, Porthos completes secure inference in approximately 30 seconds, a substantial improvement over prior protocols (the secret-sharing sketch after this list illustrates the arithmetic such protocols compute over).
  3. Aramis: A generic technique that uses hardware with integrity guarantees to transform any semi-honest MPC protocol into one secure against malicious adversaries. The hardware is trusted only for integrity, not confidentiality: data secrecy still comes from the underlying MPC protocol (the authenticated-message sketch after this list illustrates the pattern).
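
To make the fixed-point conversion at the heart of Athos concrete, the sketch below encodes floats as scaled 64-bit integers and multiplies them with a truncation step to restore the scale. The 12-bit scale, the function names, and the purely local truncation are assumptions made for illustration; Athos selects per-model scales automatically, and in the secure setting truncation is itself performed by a small MPC sub-protocol.

```python
import numpy as np

# Minimal fixed-point encoding sketch (illustrative assumptions throughout):
# a 12-bit fractional scale and 64-bit signed integers stand in for the
# ring arithmetic that MPC backends actually use.
SCALE_BITS = 12

def to_fixed(x):
    """Encode floats as integers: round(x * 2^SCALE_BITS)."""
    return np.round(np.asarray(x, dtype=np.float64) * (1 << SCALE_BITS)).astype(np.int64)

def from_fixed(x):
    """Decode fixed-point integers back to floats."""
    return np.asarray(x, dtype=np.float64) / (1 << SCALE_BITS)

def fixed_mul(a, b):
    """Multiply fixed-point values, then truncate to restore the scale.
    In the secure setting this truncation is a small MPC sub-protocol."""
    return (a * b) >> SCALE_BITS

w = to_fixed([0.25, -1.5])   # e.g. model weights
x = to_fixed([2.0, 0.5])     # e.g. activations
print(from_fixed(fixed_mul(w, x)))  # approximately [0.5, -0.75]
```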
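
For intuition about the arithmetic such protocols compute over, the following sketch shows plain 2-out-of-2 additive secret sharing over the ring Z_{2^64}: each share alone looks uniformly random, and linear steps are local. It is a generic illustration rather than Porthos itself, which is a three-party protocol with optimized convolution and non-linear sub-protocols; all names and parameters here are assumptions.

```python
import numpy as np

# 2-out-of-2 additive secret sharing over Z_{2^64} (generic sketch, not Porthos).
rng = np.random.default_rng(0)

def share(x):
    """Split a uint64 tensor into two shares; each share alone is uniformly random."""
    x = np.asarray(x, dtype=np.uint64)
    s0 = rng.integers(0, 2**64, size=x.shape, dtype=np.uint64)
    s1 = x - s0  # wraps modulo 2^64, so s0 + s1 == x (mod 2^64)
    return s0, s1

def reconstruct(s0, s1):
    return s0 + s1  # wraps modulo 2^64

a0, a1 = share(np.array([10, 20], dtype=np.uint64))
b0, b1 = share(np.array([1, 2], dtype=np.uint64))

# Additions (and other linear steps) are local: each party just adds its own shares.
print(reconstruct(a0 + b0, a1 + b1))  # [11 22]

# Secure multiplication, and hence convolution and matmul layers, requires
# interaction between the parties; that is where Porthos' optimizations apply.
```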
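
The sketch below illustrates the high-level Aramis pattern as described in the paper: the semi-honest protocol code runs on hardware trusted for integrity, every outgoing message is authenticated, and receivers accept only messages produced by the attested code, so deviations from the protocol are detected. A keyed HMAC with a pre-shared key stands in here for the hardware-backed attestation and signing machinery, which is assumed and not shown.

```python
import hashlib
import hmac

# Conceptual sketch only: an HMAC with a pre-shared key stands in for the
# hardware-backed attestation keys an Aramis-style design would establish.
ATTESTED_KEY = b"established-during-remote-attestation"  # placeholder assumption

def send(payload: bytes) -> tuple[bytes, bytes]:
    """Inside the integrity-protected code: tag the semi-honest protocol message."""
    tag = hmac.new(ATTESTED_KEY, payload, hashlib.sha256).digest()
    return payload, tag

def receive(payload: bytes, tag: bytes) -> bytes:
    """At the receiving party: accept the message only if the tag verifies,
    i.e. only if it was produced by the attested semi-honest code."""
    expected = hmac.new(ATTESTED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message rejected: not produced by attested protocol code")
    return payload

msg, tag = send(b"share of layer-3 output")
print(receive(msg, tag))  # accepted and processed
# receive(b"tampered share", tag) would raise: deviations from the protocol are caught
```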

Experimental Results

The empirical evaluation shows CrypTFlow comparing favorably with prior secure-inference systems. On ImageNet-scale networks, CrypTFlow performs secure inference efficiently while matching the accuracy of the plaintext TensorFlow implementations, and it keeps execution times and communication costs practical even for large, complex networks.

Critically, the results in both the semi-honest and malicious settings show that the system handles neural network inference at a scale beyond prior work, which was largely limited to small models over datasets such as MNIST or CIFAR-10. CrypTFlow's costs scale roughly linearly with model size, preserving its efficiency and making it applicable to real-world, large-scale neural network deployments.

Practical and Theoretical Implications

CrypTFlow significantly lowers the barrier to deploying secure neural network inference in practical applications by automating the translation of TensorFlow models to MPC-friendly formats. It paves the way for practitioners in the field of machine learning to leverage state-of-the-art cryptographic protocols without needing expertise in complex secure computation domains.

Theoretically, CrypTFlow pushes the limits of MPC in machine learning, combining compiler and systems techniques with cryptographic protocols to enable secure, efficient, and accurate inference at scale. Its modular design allows emerging MPC protocols to be plugged in as backends, broadening the scope of MPC applications to computational frameworks beyond TensorFlow.

Future Directions

Future advancements could focus on extending CrypTFlow’s capabilities to support secure training of models, not just inference, thereby creating a comprehensive solution for confidentiality-preserving machine learning. Additionally, as MPC protocols evolve and newer cryptographic techniques are introduced, CrypTFlow’s flexible design could accommodate these innovations, pushing forward the performance and applicability envelope for secure computations.

In conclusion, CrypTFlow represents a substantial advancement in the practical application of cryptographic protocols within machine learning, maintaining the delicate balance between secure computation, accuracy, and efficiency. It marks a notable contribution to the field, poised to bolster security guarantees in various machine learning applications.