Preventing Distillation-based Attacks on Neural Network IP (2204.00292v1)

Published 1 Apr 2022 in cs.CR, cs.AR, and cs.LG

Abstract: Neural networks (NNs) are already deployed in hardware today, becoming valuable intellectual property (IP) as many hours are invested in their training and optimization. Therefore, attackers may be interested in copying, reverse engineering, or even modifying this IP. Current practices in hardware obfuscation, including the widely studied logic locking technique, are insufficient to protect the actual IP of a well-trained NN: its weights. Simply hiding the weights behind a key-based scheme is inefficient (resource-hungry) and inadequate (attackers can exploit knowledge distillation). This paper proposes an intuitive method to poison the predictions, preventing distillation-based attacks; this is the first work to consider such a poisoning approach in hardware-implemented NNs. The proposed technique obfuscates an NN so that an attacker cannot train the NN entirely or accurately. We elaborate a threat model that highlights the difference between random logic obfuscation and the obfuscation of NN IP. Based on this threat model, our security analysis shows that the poisoning successfully and significantly reduces the accuracy of the stolen NN model on various representative datasets. Moreover, accuracy and prediction distributions are maintained, no functionality is disturbed, and no high overheads are incurred. Finally, we highlight that our proposed approach is flexible and does not require manipulation of the NN toolchain.
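The abstract names the approach (poisoning the exposed predictions so distillation-based attacks fail while reported accuracy is preserved) but does not spell out the mechanism. The sketch below is only a minimal illustration of that general idea, not the paper's actual method: the function name `poison_predictions` and the strategy of keeping the top-1 probability while randomly redistributing the remaining mass are assumptions made for this example.

```python
import numpy as np

def poison_predictions(probs: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Perturb a softmax output so the top-1 class (and thus measured
    accuracy) is unchanged, but the soft-label 'dark knowledge' that a
    distillation attacker relies on is destroyed.

    NOTE: this is a hypothetical illustration; the paper's concrete
    poisoning mechanism is not described in the abstract.
    """
    top = int(np.argmax(probs))
    rest_mass = 1.0 - probs[top]
    # Redistribute the non-top-1 probability mass uniformly at random,
    # keeping the winning class's probability and the normalization intact.
    noise = rng.random(probs.shape)
    noise[top] = 0.0
    noise /= noise.sum()
    poisoned = noise * rest_mass
    poisoned[top] = probs[top]
    return poisoned

rng = np.random.default_rng(0)
teacher = np.array([0.05, 0.70, 0.15, 0.10])   # honest soft output
exposed = poison_predictions(teacher, rng)     # what the attacker observes
assert exposed.argmax() == teacher.argmax()    # top-1 prediction unchanged
assert np.isclose(exposed.sum(), 1.0)          # still a valid distribution
print(teacher, exposed)
```

A student network trained on `exposed` rather than `teacher` outputs would learn scrambled inter-class similarity information, which is consistent with the abstract's claim that the stolen model's accuracy drops significantly while the protected model's predictions, and hence its measured accuracy, are untouched.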

Citations (1)
