
MEAOD: Model Extraction Attack against Object Detectors (2312.14677v1)

Published 22 Dec 2023 in cs.CR and cs.AI

Abstract: The widespread use of deep learning across industries has made deep neural network models highly valuable and, consequently, attractive targets for attackers. Model extraction attacks, particularly query-based ones, allow an attacker to replicate a substitute model with functionality comparable to the victim model, posing a significant threat to the confidentiality and security of MLaaS platforms. While many recent studies have explored model extraction attacks against classification models, object detection models, which are more frequently deployed in real-world scenarios, have received less attention. In this paper, we investigate the challenges and feasibility of query-based model extraction attacks against object detection models and propose an effective attack method called MEAOD. It selects samples from an attacker-possessed dataset via active learning to construct an efficient query dataset and augments categories with insufficient objects. We further improve extraction effectiveness by updating the annotations of the query dataset. In our experiments under both gray-box and black-box scenarios, MEAOD achieves an extraction performance of over 70% within a 10k query budget.
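The abstract's active-learning step (spending a limited query budget on the most informative samples from the attacker's pool) can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the `victim_predict` API and the confidence-margin uncertainty score are assumptions for illustration only.

```python
def uncertainty_score(detections):
    """Mean closeness of detection confidences to 0.5.

    `detections` is a list of (label, confidence) pairs returned by a
    hypothetical victim-model API for one image; an image with no
    detections is treated as maximally uncertain.
    """
    if not detections:
        return 1.0
    return sum(1.0 - abs(2.0 * conf - 1.0) for _, conf in detections) / len(detections)


def select_queries(pool, victim_predict, budget):
    """Pick the `budget` most uncertain images from the attacker's pool."""
    scored = [(uncertainty_score(victim_predict(img)), img) for img in pool]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [img for _, img in scored[:budget]]
```

In practice, the selected images would be sent to the victim detector, its responses recorded as (possibly later refined) annotations, and the substitute model trained on the resulting query dataset.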
