X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail (2306.08422v2)
Abstract: Object detection models, which are widely used in various domains (such as retail), have been shown to be vulnerable to adversarial attacks. Existing methods for detecting adversarial attacks on object detectors have had difficulty detecting new real-life attacks. We present X-Detect, a novel adversarial patch detector that can: i) detect adversarial samples in real time, allowing the defender to take preventive action; ii) provide explanations for the alerts raised to support the defender's decision-making process; and iii) handle unfamiliar threats in the form of new attacks. Given a new scene, X-Detect uses an ensemble of explainable-by-design detectors that utilize object extraction, scene manipulation, and feature transformation techniques to determine whether an alert needs to be raised. X-Detect was evaluated in both the physical and digital space using five different attack scenarios (including adaptive attacks) on the COCO dataset and our new Superstore dataset. The physical evaluation was performed using a smart shopping cart setup in real-world settings and included 17 adversarial patch attacks recorded in 1,700 adversarial videos. The results showed that X-Detect outperforms the state-of-the-art methods in distinguishing between benign and adversarial scenes for all attack scenarios while maintaining a 0% FPR (no false alarms) and providing actionable explanations for the alerts raised. A demo is available.
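The ensemble decision described above (several explainable-by-design detectors voting on whether to raise an alert) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: all names, the voting threshold, and the toy base detectors are assumptions standing in for the object-extraction, scene-manipulation, and feature-transformation components.

```python
# Hypothetical sketch of an ensemble-based adversarial patch detector in the
# spirit of X-Detect: each base detector votes on whether a scene looks
# adversarial and supplies a human-readable explanation; an alert is raised
# when enough detectors agree. Names and logic are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Verdict:
    is_adversarial: bool
    explanation: str  # reason supporting the defender's decision-making

def ensemble_alert(scene: str,
                   detectors: List[Callable[[str], Verdict]],
                   min_votes: int = 2) -> Tuple[bool, List[str]]:
    """Run every base detector; alert if at least min_votes flag the scene."""
    verdicts = [d(scene) for d in detectors]
    votes = sum(v.is_adversarial for v in verdicts)
    reasons = [v.explanation for v in verdicts if v.is_adversarial]
    return votes >= min_votes, reasons

# Toy stand-ins for the paper's three detector families.
def object_extraction_check(scene: str) -> Verdict:
    # e.g. the extracted object's class disagrees with the model's prediction
    return Verdict("patch" in scene, "extracted object contradicts prediction")

def scene_manipulation_check(scene: str) -> Verdict:
    # e.g. the prediction is unstable under benign scene manipulations
    return Verdict("patch" in scene, "prediction unstable under manipulation")

def feature_transform_check(scene: str) -> Verdict:
    return Verdict(False, "")

alert, reasons = ensemble_alert(
    "frame with patch",
    [object_extraction_check, scene_manipulation_check, feature_transform_check],
)
```

Here a majority vote (2 of 3) triggers the alert, and the collected explanations are what would be surfaced to the defender; the real system's per-detector logic and aggregation rule are described in the paper itself.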