Fragile Model Watermark for integrity protection: leveraging boundary volatility and sensitive sample-pairing (2404.07572v3)

Published 11 Apr 2024 in cs.CR and cs.AI

Abstract: Neural networks increasingly influence people's lives. Ensuring that neural networks are deployed faithfully, as designed by their model owners, is crucial, because deployed models are susceptible to various malicious or unintentional modifications, such as backdooring and poisoning attacks. Fragile model watermarks aim to detect unexpected tampering that could lead DNN models to make incorrect decisions, flagging any modification to the model as sensitively as possible. However, prior watermarking methods suffered from inefficient sample generation and insufficient sensitivity, limiting their practical applicability. Our approach employs a sample-pairing technique that places the model's decision boundary between pairs of samples while simultaneously maximizing logits. This ensures that the model's decisions on the sensitive samples change as much as possible, so that the Top-1 labels flip easily regardless of the direction in which the boundary moves.
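A minimal sketch of the boundary-straddling idea behind sensitive sample pairs, shown for a toy linear classifier (the model, names, and parameters here are illustrative assumptions, not the paper's implementation): starting from a seed input, step exactly onto the decision boundary between the top-2 classes, then take small symmetric steps to either side so that the two samples in a pair receive different Top-1 labels. Any tampering that shifts the boundary even slightly then tends to flip at least one recorded label.

```python
import numpy as np

def logits(W, b, x):
    """Logits of a simple linear classifier (stand-in for a DNN's final layer)."""
    return W @ x + b

def make_sensitive_pair(W, b, x0, iters=50, tol=1e-8, eps=0.05):
    """Move x0 onto the boundary between its top-2 classes, then return a
    pair of samples straddling that boundary (illustrative sketch only;
    the paper additionally maximizes logits to amplify sensitivity)."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        z = logits(W, b, x)
        order = np.argsort(z)
        i, j = order[-1], order[-2]          # top-2 classes
        g = W[i] - W[j]                      # gradient of the margin z_i - z_j
        margin = z[i] - z[j]
        if margin < tol:
            break
        # For a linear model this step lands exactly on the boundary.
        x -= (margin / (g @ g + 1e-12)) * g
    d = g / (np.linalg.norm(g) + 1e-12)
    return x + eps * d, x - eps * d          # opposite sides of the boundary

def verify(W, b, pairs, recorded):
    """Integrity check: recompute Top-1 labels for each sensitive pair and
    compare against the labels recorded at watermark-generation time."""
    preds = [(int(np.argmax(logits(W, b, a))), int(np.argmax(logits(W, b, c))))
             for a, c in pairs]
    return preds == recorded
```

At verification time the owner queries the (possibly tampered) model with the stored pairs; because each pair sits tightly against the boundary with opposite labels, small weight perturbations are likely to flip a label and fail `verify`.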
