GanFinger: GAN-Based Fingerprint Generation for Deep Neural Network Ownership Verification (2312.15617v1)

Published 25 Dec 2023 in cs.CR and cs.CV

Abstract: Deep neural networks (DNNs) are extensively employed in a wide range of application scenarios. Generally, training a commercially viable neural network requires significant amounts of data and computing resources, and it is easy for unauthorized users to use the networks illegally. Therefore, network ownership verification has become one of the most crucial steps in safeguarding digital assets. To verify the ownership of networks, the existing network fingerprinting approaches perform poorly in the aspects of efficiency, stealthiness, and discriminability. To address these issues, we propose a network fingerprinting approach, named GanFinger, to construct network fingerprints based on network behavior, which is characterized by the network outputs of pairs of original examples and conferrable adversarial examples. Specifically, GanFinger leverages Generative Adversarial Networks (GANs) to effectively generate conferrable adversarial examples with imperceptible perturbations. These examples can exhibit identical outputs on copyrighted and pirated networks while producing different results on irrelevant networks. Moreover, to enhance the accuracy of fingerprint ownership verification, the network similarity is computed based on the accuracy-robustness distance of the fingerprint examples' outputs. To evaluate the performance of GanFinger, we construct a comprehensive benchmark consisting of 186 networks with five network structures and four popular network post-processing techniques. The benchmark experiments demonstrate that GanFinger significantly outperforms the state-of-the-art in efficiency, stealthiness, and discriminability. It achieves fingerprint generation 6.57 times faster and boosts the ARUC value by 0.175, resulting in a relative improvement of about 26%.

GanFinger: GAN-Based Fingerprint Generation for DNN Ownership Verification

The paper "GanFinger: GAN-Based Fingerprint Generation for Deep Neural Network Ownership Verification" presents a novel approach leveraging Generative Adversarial Networks (GANs) to generate network fingerprints for the purpose of intellectual property (IP) protection of Deep Neural Networks (DNNs). Given the increasing deployment of DNNs in commercial applications, safeguarding the IP of these models has become paramount. This paper addresses three major shortcomings of existing fingerprinting methods, namely efficiency, stealthiness, and discriminability, by introducing the GanFinger framework.

Introduction

The paper opens by underlining the challenges in securing DNN IP in an era where model reuse and sharing through Machine Learning as a Service (MLaaS) are common. Current protection methods, which include network watermarking and fingerprinting, face limitations. Watermarking typically embeds a pre-designed signature into the network parameters, which can degrade network performance and leaves the signature susceptible to removal through network modifications. Conversely, fingerprinting techniques extract intrinsic characteristics from the network, presenting a non-intrusive alternative.

Contributions

The paper's contributions are outlined as follows:

  1. Efficiency: GanFinger significantly reduces the time required for fingerprint generation by leveraging GANs. It achieves a speed-up factor of approximately 6.57 compared to the best-performing existing model.
  2. Stealthiness: The generated fingerprints involve pairs of original and conferrable adversarial examples, which are hard to distinguish from natural data and thus enhance stealthiness.
  3. Discriminability: Introducing the accuracy-robustness distance (ARD) metric, GanFinger can effectively differentiate between pirated networks and irrelevant ones, reducing the risk of false positives.

Methodology

GanFinger operates in three phases: Network Preparation, Fingerprint Generation, and Verification.

  1. Network Preparation: The authors prepare the networks by categorizing them into victim networks, positive networks (pirated), and negative networks (irrelevant). Each type serves a distinct role in the generation and validation of fingerprints.
  2. Fingerprint Generation: This process involves a generator and a discriminator within the GAN framework. The aim is to create conferrable adversarial examples that exhibit similar misclassifications on both pirated and victim networks but differ on irrelevant networks. This ensures that the generated fingerprints are unique to the victim networks and cannot be easily mimicked (a simplified training-step sketch follows this list).
  3. Verification: The proposed ARD metric assesses the similarity between the victim and suspicious networks by comparing the accuracy inconsistency and robustness consistency of fingerprint pairs. The ARD metric forms the basis of the ownership verification strategy and is used to classify networks as either pirated or irrelevant (an illustrative verification sketch also follows this list).
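
To make the fingerprint-generation phase concrete, below is a minimal PyTorch-style sketch of one generator update for producing conferrable adversarial examples. All names (`generator`, `discriminator`, `victim`, `positives`, `negatives`) and the loss weights are hypothetical, and the losses are simplified stand-ins for the paper's objectives: keep the perturbation small and realistic-looking, make the victim and its pirated copies agree on the perturbed example's adversarial label, and keep irrelevant networks predicting the clean label.

```python
# Simplified sketch of one generator step for conferrable adversarial examples.
# Names, shapes, and loss weights are illustrative, not the paper's actual code.
import torch
import torch.nn.functional as F


def generator_step(generator, discriminator, victim, positives, negatives,
                   x, eps=8 / 255, weights=(1.0, 1.0, 1.0, 1.0)):
    w_gan, w_adv, w_conf, w_irr = weights

    # Bounded, imperceptible perturbation proposed by the generator (stealthiness).
    delta = eps * torch.tanh(generator(x))            # ||delta||_inf <= eps
    x_adv = torch.clamp(x + delta, 0.0, 1.0)

    # GAN realism loss: the discriminator (one real/fake logit per image)
    # should judge x_adv as a natural example.
    real = torch.ones(x.size(0), 1, device=x.device)
    gan_loss = F.binary_cross_entropy_with_logits(discriminator(x_adv), real)

    with torch.no_grad():
        clean_label = victim(x).argmax(dim=1)         # victim's label on the original

    # The victim should be fooled on x_adv (adversarial behaviour) ...
    victim_logits = victim(x_adv)
    adv_loss = -F.cross_entropy(victim_logits, clean_label)

    # ... pirated copies should reproduce the victim's adversarial label (conferrability) ...
    adv_label = victim_logits.argmax(dim=1)
    conf_loss = sum(F.cross_entropy(p(x_adv), adv_label) for p in positives)

    # ... while irrelevant networks should keep predicting the clean label.
    irr_loss = sum(F.cross_entropy(n(x_adv), clean_label) for n in negatives)

    return w_gan * gan_loss + w_adv * adv_loss + w_conf * conf_loss + w_irr * irr_loss
```

In a full pipeline of this kind, the discriminator would be trained alternately with the generator, and the (original, adversarial) pairs that remain conferrable across the positive and negative ensembles would form the fingerprint set.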
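
For the verification phase, the summary above describes the accuracy-robustness distance only informally. The sketch below is one plausible reading, not the paper's exact formula: score a suspect network by how often it agrees with the victim on the original examples and on the adversarial examples of each fingerprint pair; a pirated copy should agree on both, while an irrelevant network tends to diverge on the adversarial examples. The decision threshold `tau` is illustrative.

```python
import torch


@torch.no_grad()
def ownership_score(victim, suspect, fingerprints):
    """Agreement rates over (x, x_adv) fingerprint pairs.

    `fingerprints` is an iterable of (x, x_adv) tensor batches; how the paper
    combines the two rates into its ARD metric may differ from this sketch.
    """
    acc_match, rob_match, n = 0, 0, 0
    for x, x_adv in fingerprints:
        v_clean = victim(x).argmax(dim=1)
        v_adv = victim(x_adv).argmax(dim=1)
        s_clean = suspect(x).argmax(dim=1)
        s_adv = suspect(x_adv).argmax(dim=1)
        acc_match += (v_clean == s_clean).sum().item()   # agreement on originals
        rob_match += (v_adv == s_adv).sum().item()       # agreement on adversarial examples
        n += x.size(0)
    return acc_match / n, rob_match / n


def is_pirated(victim, suspect, fingerprints, tau=0.5):
    """Toy decision rule: flag the suspect when both agreement rates exceed tau."""
    acc, rob = ownership_score(victim, suspect, fingerprints)
    return acc > tau and rob > tau
```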

Experimental Evaluation

The performance of GanFinger was measured against several state-of-the-art (SOTA) methods: IPGuard, ConferAE, DeepFoolFP, and MetaFinger. A comprehensive benchmark with 186 networks trained and tested on CIFAR-10 was constructed to validate the robustness and effectiveness of GanFinger.

  1. Effectiveness: GanFinger demonstrated a significant enhancement in ARUC value, achieving an improvement of about 26% over MetaFinger. This indicates its robust capability in distinguishing pirated networks from irrelevant ones (an illustrative ARUC computation is sketched after this list).
  2. Efficiency: GanFinger's fingerprint generation process was substantially faster than the baselines, roughly 6.57 times faster than the best prior method. This efficiency is critical for practical deployment, where the speed of verification is paramount.
  3. Stealthiness: The adversarial examples generated were visually indistinguishable from original examples, ensuring that defenders can discreetly verify network ownership without alerting the attackers.
  4. Robustness: GanFinger maintained high ARD values under various post-processing attacks including fine-tuning, pruning, and adversarial training, demonstrating its resilience.
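
Since ARUC is the headline metric here, the following sketch shows one common way an area under a robustness-uniqueness curve is computed in the fingerprinting literature: sweep a matching-rate threshold, measure robustness (fraction of pirated networks accepted) and uniqueness (fraction of irrelevant networks rejected), and integrate the overlap of the two curves. The paper's exact definition may differ; treat this formulation as an assumption.

```python
import numpy as np


def aruc(pos_scores, neg_scores, num_thresholds=101):
    """Area under min(robustness, uniqueness) over matching-rate thresholds.

    pos_scores: matching rates of pirated (positive) networks with the victim.
    neg_scores: matching rates of irrelevant (negative) networks.
    This is one common formulation; the paper's exact definition may differ.
    """
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    thresholds = np.linspace(0.0, 1.0, num_thresholds)

    robustness = [(pos >= t).mean() for t in thresholds]   # pirated models accepted
    uniqueness = [(neg < t).mean() for t in thresholds]    # irrelevant models rejected
    overlap = np.minimum(robustness, uniqueness)

    return np.trapz(overlap, thresholds)                   # integrate over the threshold axis


# Toy usage: well-separated positive and negative scores yield a large ARUC.
print(aruc(pos_scores=[0.95, 0.9, 0.88], neg_scores=[0.1, 0.15, 0.2]))
```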

Implications

The implications of GanFinger are twofold:

  1. Practical Implications: This method allows for secure IP protection of DNNs deployed in commercial settings. It ensures that network owners can verify ownership of potentially plagiarized copies with minimal overhead.
  2. Theoretical Implications: GanFinger introduces a novel application of GANs in the domain of network security. By using the ARD metric, it sets a precedent for how similarity measures can be effectively utilized in model verification.

Conclusion

The GanFinger framework represents a significant advancement in the field of DNN ownership verification. By addressing the critical aspects of efficiency, stealthiness, and discriminability, GanFinger not only enhances the practical security of DNN IP but also contributes novel methodologies to the theoretical framework of model fingerprinting. Future research could explore extending this approach to different types of DNN architectures and further optimizing the robustness of generated fingerprints under more diverse attack vectors.

References (25)
  1. Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring. In Enck, W.; and Felt, A. P., eds., 27th USENIX Security Symposium, USENIX Security 2018, Baltimore, MD, USA, August 15-17, 2018, 1615–1631. USENIX Association.
  2. IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary. In Cao, J.; Au, M. H.; Lin, Z.; and Yung, M., eds., ASIA CCS ’21: ACM Asia Conference on Computer and Communications Security, Virtual Event, Hong Kong, June 7-11, 2021, 14–25. ACM.
  3. Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models. In 43rd IEEE Symposium on Security and Privacy, SP 2022, San Francisco, CA, USA, May 22-26, 2022, 824–841. IEEE.
  4. Generative Adversarial Nets. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, 2672–2680.
  5. Explaining and Harnessing Adversarial Examples. In Bengio, Y.; and LeCun, Y., eds., 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
  6. Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks. In NeurIPS.
  7. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, 770–778. IEEE Computer Society.
  8. DenseNet: Implementing Efficient ConvNet Descriptor Pyramids. CoRR, abs/1404.1869.
  9. High Accuracy and High Fidelity Extraction of Neural Networks. In Capkun, S.; and Roesner, F., eds., 29th USENIX Security Symposium, USENIX Security 2020, August 12-14, 2020, 1345–1362. USENIX Association.
  10. A Novel Verifiable Fingerprinting Scheme for Generative Adversarial Networks. arXiv preprint arXiv:2106.11760.
  11. SoK: How Robust is Image Classification Deep Neural Network Watermarking? In 43rd IEEE Symposium on Security and Privacy, SP 2022, San Francisco, CA, USA, May 22-26, 2022, 787–804. IEEE.
  12. Deep Neural Network Fingerprinting by Conferrable Adversarial Examples. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
  13. Traffic Sign Recognition Using a Multi-Task Convolutional Neural Network. IEEE Trans. Intell. Transp. Syst., 19(4): 1100–1111.
  14. Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, 13420–13429. IEEE.
  15. Press, G. 2016. Cleaning big data: Most time-consuming, least enjoyable data science task, survey says. Forbes, March, 23: 15.
  16. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Bengio, Y.; and LeCun, Y., eds., 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
  17. Stealing Machine Learning Models via Prediction APIs. In Holz, T.; and Savage, S., eds., 25th USENIX Security Symposium, USENIX Security 16, Austin, TX, USA, August 10-12, 2016, 601–618. USENIX Association.
  18. Embedding Watermarks into Deep Neural Networks. In Ionescu, B.; Sebe, N.; Feng, J.; Larson, M. A.; Lienhart, R.; and Snoek, C., eds., Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, ICMR 2017, Bucharest, Romania, June 6-9, 2017, 269–277. ACM.
  19. Stealing Hyperparameters in Machine Learning. In 2018 IEEE Symposium on Security and Privacy, SP 2018, Proceedings, 21-23 May 2018, San Francisco, California, USA, 36–52. IEEE Computer Society.
  20. CosFace: Large Margin Cosine Loss for Deep Face Recognition. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, 5265–5274. Computer Vision Foundation / IEEE Computer Society.
  21. Fingerprinting Deep Neural Networks - a DeepFool Approach. In IEEE International Symposium on Circuits and Systems, ISCAS 2021, Daegu, South Korea, May 22-28, 2021, 1–5. IEEE.
  22. Generating Adversarial Examples with Adversarial Networks. In Lang, J., ed., Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, 3905–3911. ijcai.org.
  23. InFIP: An Explainable DNN Intellectual Property Protection Method based on Intrinsic Features. CoRR, abs/2210.07481.
  24. MetaFinger: Fingerprinting the Deep Neural Networks with Meta-training. In Raedt, L. D., ed., Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, 776–782. ijcai.org.
  25. Medical image classification using synergic deep learning. Medical Image Anal., 54: 10–19.
Authors (7)
  1. Huali Ren (2 papers)
  2. Anli Yan (3 papers)
  3. Xiaojun Ren (4 papers)
  4. Pei-Gen Ye (2 papers)
  5. Chong-zhi Gao (2 papers)
  6. Zhili Zhou (17 papers)
  7. Jin Li (366 papers)