
CrypTFlow2: Practical 2-Party Secure Inference (2010.06457v1)

Published 13 Oct 2020 in cs.CR and cs.LG

Abstract: We present CrypTFlow2, a cryptographic framework for secure inference over realistic Deep Neural Networks (DNNs) using secure 2-party computation. CrypTFlow2 protocols are both correct -- i.e., their outputs are bitwise equivalent to the cleartext execution -- and efficient -- they outperform the state-of-the-art protocols in both latency and scale. At the core of CrypTFlow2, we have new 2PC protocols for secure comparison and division, designed carefully to balance round and communication complexity for secure inference tasks. Using CrypTFlow2, we present the first secure inference over ImageNet-scale DNNs like ResNet50 and DenseNet121. These DNNs are at least an order of magnitude larger than those considered in the prior work of 2-party DNN inference. Even on the benchmarks considered by prior work, CrypTFlow2 requires an order of magnitude less communication and 20x-30x less time than the state-of-the-art.

Citations (266)

Summary

  • The paper introduces a novel cryptographic framework that enables secure two-party inference for large-scale DNNs.
  • It presents new protocols for non-linear layers and fixed-point division, significantly enhancing communication efficiency and runtime.
  • Empirical evaluations show order-of-magnitude improvements on ImageNet-scale benchmarks compared to traditional garbled circuit approaches.

Practical 2-Party Secure Inference: A Cryptographic Framework for DNNs

The paper introduces a cryptographic framework for secure inference over realistic Deep Neural Networks (DNNs) utilizing secure two-party computation (2PC). The framework focuses on protocols that are both correct and efficient, outperforming existing protocols in terms of both latency and scale. As a notable achievement, the authors present the first secure inference over large-scale DNNs like ResNet50 and DenseNet121, which are significantly larger than those previously considered in the literature.

Key Contributions

The authors make several key contributions toward addressing the challenges of secure inference in practical machine learning tasks, such as those involving ImageNet-scale data:

  • New Protocols for Non-linear Layers: The framework introduces new 2PC protocols for the millionaires' problem and for the derivative of ReLU (DReLU), enabling efficient and secure evaluation of non-linear layers such as ReLU, Maxpool, and Argmax. These protocols are designed to balance communication and round complexity better than prior work (see the sketch after this list).
  • Efficient Division Protocols: The paper presents protocols for division, which are essential for implementing fixed-point arithmetic correctly in secure inference. These protocols guarantee that the secure execution produces results bitwise identical to a cleartext execution, a correctness property that competing methods do not provide.
  • Flexible Secure Inference System: Leveraging these protocols, the authors build a flexible system for Secure and Correct Inference (SCI) in which linear layers can be evaluated with either homomorphic encryption (HE) or oblivious transfer (OT). This flexibility lets the protocols be adapted effectively to different network configurations.
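
The decomposition that the non-linear-layer protocols evaluate can be made concrete with a minimal Python sketch. This is not the paper's SCI code: the bitwidth, helper names, and test values are illustrative, and the comparison and multiplexer are computed in the clear here solely to pin down the arithmetic that the secure protocols reproduce bit-for-bit over additive shares of L-bit ring elements.

```python
import random

L = 32                         # bitwidth of the ring of L-bit integers (illustrative choice)
MOD = 1 << L

def share(x):
    """Split a ring element into two additive shares modulo 2^L."""
    r = random.randrange(MOD)
    return r, (x - r) % MOD

def reconstruct(x0, x1):
    return (x0 + x1) % MOD

def to_signed(x):
    """Interpret a ring element as a signed L-bit (two's complement) integer."""
    return x - MOD if x >= MOD // 2 else x

def drelu(x):
    """DReLU(x) = 1 if x >= 0 else 0. In the protocol this bit is produced by a
    millionaires'-style comparison on the parties' shares; it is computed in the
    clear here only to illustrate the decomposition."""
    return 1 if to_signed(x) >= 0 else 0

def relu(x):
    """ReLU(x) = DReLU(x) * x. In the secure protocol the product is realized
    with an OT-based multiplexer rather than a cleartext multiplication."""
    return (drelu(x) * x) % MOD

# Sanity check: the decomposition matches plain ReLU on signed values.
for v in (5, -7, 0, 123_456, -123_456):
    x0, x1 = share(v % MOD)
    assert to_signed(relu(reconstruct(x0, x1))) == max(v, 0)
```

Maxpool and Argmax follow the same pattern: repeated secure comparisons produce selection bits, and multiplexers pick the corresponding values or indices.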

Technical Innovations

  1. Millionaires' Protocol: The authors provide a novel protocol for secure comparison based on the millionaires' problem, achieving significantly reduced communication complexity. This protocol is instrumental in evaluating ReLU and Maxpool layers securely.
  2. Fixed-point Arithmetic: A central aspect of the contribution is making secure inference over fixed-point arithmetic exact. In particular, division operations are implemented so that the secure results agree with the cleartext computation rather than diverging from it.
  3. Truncation and Division: The paper derives closed-form expressions for truncation and division over secret shares and realizes them with efficient cryptographic protocols. Unlike prior work, which incurs probabilistic truncation errors, this guarantees that the secure execution faithfully mimics a cleartext execution (see the sketch following this list).
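
To see why exact truncation calls for a dedicated protocol, the rough Python sketch below (illustrative, not the paper's construction) contrasts the faithful arithmetic right shift of a shared fixed-point value with per-share truncation in the style of earlier probabilistic approaches. The per-share variant is usually off by one in the last bit and, with small probability, off by a large amount; the point of CrypTFlow2's division and truncation protocols is to remove both error sources so the secure result equals the cleartext one.

```python
import random

L, S = 32, 12                  # ring bitwidth and fixed-point scale (illustrative choices)
MOD = 1 << L

def to_signed(x):
    return x - MOD if x >= MOD // 2 else x

def faithful_truncate(x):
    """Cleartext semantics: arithmetic right shift of the reconstructed value.
    This is what CrypTFlow2's truncation/division protocol reproduces exactly."""
    return (to_signed(x) >> S) % MOD

def local_truncation(x0, x1):
    """Per-share truncation in the style of earlier probabilistic approaches:
    each party shifts its own share, one of them after negation. The result is
    usually off by one in the last bit and occasionally off by a large amount."""
    y0 = x0 >> S
    y1 = (MOD - ((MOD - x1) >> S)) % MOD
    return (y0 + y1) % MOD

off_by_one, large = 0, 0
trials = 100_000
for _ in range(trials):
    v = random.randrange(-(1 << 20), 1 << 20) % MOD   # magnitude-bounded fixed-point value
    x0 = random.randrange(MOD)
    x1 = (v - x0) % MOD
    diff = to_signed((local_truncation(x0, x1) - faithful_truncate(v)) % MOD)
    if diff == 0:
        continue
    if abs(diff) == 1:
        off_by_one += 1
    else:
        large += 1
print(f"off-by-one results: {off_by_one}/{trials}, large errors: {large}/{trials}")
```

Running this shows frequent one-bit disagreements and rare large ones, which is precisely the nondeterministic error model that the closed-form correction in CrypTFlow2 is designed to eliminate.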

Empirical Evaluation

The authors demonstrate the practicality and scalability of their framework by conducting extensive empirical evaluations. Notably, secure inferences on state-of-the-art ImageNet-scale benchmarks are shown to complete in feasible runtimes on commodity hardware:

  • Comparison with Garbled Circuits: Experimental results indicate that their ReLU protocols are significantly faster and more communication-efficient than existing garbled circuit solutions.
  • Performance on State-of-the-art DNNs: In evaluating SqueezeNet, ResNet50, and DenseNet121, their system shows superior performance, often reducing communication and computation times by an order of magnitude compared to related works.

Implications and Future Directions

The introduction of these protocols has vital implications both practically and theoretically for secure machine learning. Practically, it enables the deployment of machine learning models in privacy-sensitive contexts, such as healthcare, with minimal risk to data privacy. Theoretically, it establishes new benchmarks for the efficiency and correctness of secure computations involving neural networks.

The research paves the way for further exploration into secure machine learning paradigms. Potential future developments include exploring three-party computation protocols, addressing malicious adversarial settings, and extending to secure training phases of machine learning models. Optimizing for different hardware architectures, such as specialized accelerators or distributed systems, could yield additional efficiencies in practical implementations. Ultimately, these efforts contribute to the broader vision of achieving secure and privacy-preserving machine learning as a standard practice.