
Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks (2403.12988v1)

Published 4 Mar 2024 in cs.CV and cs.AI

Abstract: Adversarial patch attacks, crafted to compromise the integrity of Deep Neural Networks (DNNs), significantly impact AI systems designed for object detection and classification tasks. The primary purpose of this work is to defend models against real-world physical attacks that target object detection and classification. We analyze attack techniques and propose a robust defense approach. We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture and position. Leveraging the inpainting pre-processing technique, we effectively restore the original confidence levels, demonstrating the importance of robust defenses in mitigating these threats. Following fine-tuning of an AI model for traffic sign classification, we subjected it to a simulated pixelized patch-based physical adversarial attack, resulting in misclassifications. Our inpainting defense approach significantly enhances model resilience, achieving high accuracy and reliable localization despite the adversarial attacks. This contribution advances the resilience and reliability of object detection and classification networks against adversarial challenges, providing a robust foundation for critical applications.
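The inpainting pre-processing defense described in the abstract can be sketched as follows. This is a minimal diffusion-based fill in NumPy that assumes the patch region has already been localized (a binary mask); it is a crude stand-in for the stronger inpainting methods the paper refers to, not the authors' exact pipeline.

```python
import numpy as np

def inpaint_patch(image, mask, iters=200):
    """Toy inpainting defense: pixels inside the suspected adversarial
    patch (mask == True) are seeded with the mean of the clean region,
    then smoothed by repeated 4-neighbour averaging (Jacobi diffusion).
    A stand-in for proper inpainting (e.g. Telea or deep inpainting);
    the patch mask is assumed to come from a separate detection step."""
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()          # seed the hole with a plausible value
    for _ in range(iters):
        padded = np.pad(img, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        img[mask] = neigh[mask]            # only masked pixels are updated
    return img

# Toy example: a flat grey image with a bright simulated adversarial patch.
clean = np.full((32, 32), 0.5)
attacked = clean.copy()
attacked[10:20, 10:20] = 1.0               # simulated patch
mask = np.zeros_like(clean, dtype=bool)
mask[10:20, 10:20] = True                  # assumed-known patch location
restored = inpaint_patch(attacked, mask)
```

In the paper's setting the restored image is then fed to the detector/classifier, which is how the original confidence levels are recovered; here the flat background makes the fill exact, while real images would only be approximately restored.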

Authors (3)
  1. Roie Kazoom (2 papers)
  2. Raz Birman (5 papers)
  3. Ofer Hadar (11 papers)
Citations (1)
