I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators (1803.05847v2)

Published 5 Mar 2018 in cs.CV and cs.LG

Abstract: Deep learning has become the de-facto computational paradigm for various kinds of perception problems, including many privacy-sensitive applications such as online medical image analysis. No doubt to say, the data privacy of these deep learning systems is a serious concern. Different from previous research focusing on exploiting privacy leakage from deep learning models, in this paper, we present the first attack on the implementation of deep learning models. To be specific, we perform the attack on an FPGA-based convolutional neural network accelerator and we manage to recover the input image from the collected power traces without knowing the detailed parameters in the neural network. For the MNIST dataset, our power side-channel attack is able to achieve up to 89% recognition accuracy.

Citations (190)

Summary

Overview of Power Side-Channel Attack on CNN Accelerators

The paper "I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators" presents an exploration into the security vulnerabilities of FPGA-based CNN accelerators from a side-channel standpoint. Specifically, the paper hypothesizes and verifies the potential for a power side-channel attack that can recover input images processed by these accelerators, a significant concern given the sensitive nature of many deep learning applications.

The investigation is grounded in the increasing reliance on CNNs for complex tasks necessitating fast computation and energy efficiency, typically achieved using specialized hardware like FPGAs. While CNNs have well-documented security risks when model parameters are accessible, this work demonstrates that even without direct knowledge of these parameters, critical data leakage vulnerabilities exist during the inference phase.

Methodology

The methodology centers on collecting and analyzing power traces from an FPGA-based CNN accelerator as it processes inputs. The attack targets the convolutional layer, which operates directly on the raw input pixels. This choice is strategic: the layer's regular, local data-access pattern makes its cycle-by-cycle power draw correlate with the pixel values being processed, so information about the input can be inferred from power alone, without access to the model's weights or to any communication channel.
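
The leakage mechanism can be illustrated with a toy simulation. The sketch below assumes a Hamming-weight power model for the pixel stream entering the accelerator's convolution datapath; that model, the Gaussian measurement noise, and all function names are illustrative assumptions, not the exact leakage characterized in the paper.

```python
# Toy leakage model: one simulated power sample per pixel streamed into
# the convolution datapath, under a Hamming-weight assumption.
import numpy as np

def hamming_weight(x: np.ndarray) -> np.ndarray:
    """Number of set bits in each 8-bit value."""
    return np.unpackbits(x.astype(np.uint8)[..., None], axis=-1).sum(axis=-1)

def simulate_power_trace(image: np.ndarray, noise_std: float = 0.5,
                         rng=None) -> np.ndarray:
    """Model the per-cycle power draw as the Hamming weight of the pixel
    latched into the line buffer that cycle, plus Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    stream = image.flatten()                       # row-major pixel stream
    leakage = hamming_weight(stream).astype(float)
    return leakage + rng.normal(0.0, noise_std, size=leakage.shape)

# Dark (background) pixels draw visibly less simulated power than bright
# (foreground) pixels -- exactly the contrast the attack exploits.
img = np.zeros((28, 28), dtype=np.uint8)
img[10:18, 10:18] = 200                            # a bright square
print(simulate_power_trace(img).reshape(28, 28)[12, 8:20].round(1))
```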

The attack is developed in two scenarios, representing adversaries with different capabilities (a code sketch of both follows this list):

  1. Background Detection:
    • A passive adversary scenario where power trace data is used to differentiate between regions of an input image (i.e., distinguish background from foreground).
    • This method does not require the adversary to have profiling capabilities before executing the attack.
  2. Image Reconstruction via Power Template:
    • An active adversary scenario where profiling of the device is performed using known input patterns to construct a "power template".
    • This template is then used during the attack phase to reconstruct pixel values with much higher fidelity than background detection alone.
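
Both scenarios can be sketched on top of the simulated traces above. In this minimal sketch, the global-threshold rule for background detection and the per-value mean power used as the template statistic are simplifications, not the paper's exact procedure: the passive attack only separates dark from bright pixels, while the active attack first profiles the device with known images and then matches unknown power samples against the resulting template.

```python
import numpy as np

def background_detection(trace: np.ndarray, shape=(28, 28)) -> np.ndarray:
    """Passive attack: label each pixel as foreground (1) or background (0)
    by thresholding its power sample; no profiling phase is needed."""
    samples = trace.reshape(shape)
    return (samples > samples.mean()).astype(np.uint8)

def build_power_template(profiling_images, profiling_traces) -> np.ndarray:
    """Active attack, profiling phase: for every 8-bit pixel value, record
    the mean power sample observed while that value was being processed."""
    values = np.concatenate([img.flatten() for img in profiling_images])
    samples = np.concatenate([t.flatten() for t in profiling_traces])
    template = np.full(256, np.inf)                # inf = value never profiled
    for v in range(256):
        mask = values == v
        if mask.any():
            template[v] = samples[mask].mean()
    return template

def reconstruct_image(trace: np.ndarray, template: np.ndarray,
                      shape=(28, 28)) -> np.ndarray:
    """Attack phase: assign each power sample the pixel value whose
    template entry is closest."""
    diffs = np.abs(trace[:, None] - template[None, :])   # (pixels, 256)
    return diffs.argmin(axis=1).astype(np.uint8).reshape(shape)
```

In practice the template would be built from many profiling images so that every pixel value is covered; under a pure Hamming-weight model, values sharing a Hamming weight remain indistinguishable, so the sketch recovers intensity classes rather than exact values.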

Results

For empirical validation, the paper uses the MNIST dataset. The authors show that input images can be reconstructed with considerable accuracy, both at the pixel level and in terms of recognizable shape. Using only power side-channel information, the attack achieved up to 89% recognition accuracy on the recovered MNIST images, indicating a substantial privacy risk.
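
A recognition-accuracy figure of this kind could be computed by running the reconstructed images through a digit classifier trained on clean data and scoring its predictions against the true labels. The sketch below uses a scikit-learn logistic-regression classifier as an illustrative stand-in; it is not the evaluation pipeline used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def recognition_accuracy(clf: LogisticRegression,
                         reconstructed: np.ndarray,
                         true_labels: np.ndarray) -> float:
    """Fraction of reconstructed images that a classifier trained on clean
    MNIST data still assigns to the correct digit class."""
    flat = reconstructed.reshape(len(reconstructed), -1) / 255.0
    return float((clf.predict(flat) == true_labels).mean())

# Usage, assuming `clf` was fit on clean MNIST and `recon_imgs`, `labels`
# come from the template attack sketched earlier:
#   acc = recognition_accuracy(clf, recon_imgs, labels)
#   print(f"recognition accuracy: {acc:.1%}")
```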

Implications and Future Directions

This research highlights a crucial security dimension in hardware accelerators widely regarded as efficient computation solutions for deep learning workloads. The findings suggest that FPGA-based accelerators expose critical side-channel leaks that could lead to substantial privacy breaches. As such, the paper calls for a reevaluation of security models for AI systems, augmenting traditional software-based defenses with hardware-centric approaches.

Potential mitigation strategies may include noise injection to obscure power signals or restructuring of computation to prevent predictable power consumption patterns. Further research could explore the applicability of these side-channel attacks to other architectures like ASICs or GPUs, broadening the scope of known vulnerabilities in AI hardware.
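
As a rough illustration of the noise-injection idea, the sketch below superimposes randomly toggled dummy-logic activity on the simulated power trace from the earlier sketches; the drop in reconstruction quality can then be measured directly. The dummy-activity model and its scale are illustrative assumptions, not a proposal from the paper.

```python
import numpy as np

def inject_noise(trace: np.ndarray, dummy_scale: float = 4.0,
                 rng=None) -> np.ndarray:
    """Add power drawn by randomly toggled dummy logic on top of the real
    trace, lowering the signal-to-noise ratio seen by the attacker."""
    rng = np.random.default_rng() if rng is None else rng
    return trace + rng.normal(0.0, dummy_scale, size=trace.shape)

# With simulate_power_trace, build_power_template and reconstruct_image
# from the earlier sketches, the mitigation's effect is the change in
# mean absolute pixel error:
#   noisy = inject_noise(simulate_power_trace(img))
#   err = np.abs(reconstruct_image(noisy, template).astype(int) - img).mean()
```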

The work points to future development directions that emphasize the convergence of hardware design and security, so that such vulnerabilities can be addressed preemptively and AI can be deployed safely in privacy-sensitive domains.