
Fooling automated surveillance cameras: adversarial patches to attack person detection (1904.08653v1)

Published 18 Apr 2019 in cs.CV

Abstract: Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the output of the network can be swayed to produce a completely different result. The first attacks did this by slightly changing the pixel values of an input image to fool a classifier into outputting the wrong class. Other approaches have tried to learn "patches" that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that these attacks are feasible in the real world, i.e. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it. In this paper, we present an approach to generate adversarial patches for targets with lots of intra-class variety, namely persons. The goal is to generate a patch that is able to successfully hide a person from a person detector. Such an attack could, for instance, be used maliciously to circumvent surveillance systems: intruders can sneak around undetected by holding a small cardboard plate in front of their body, aimed towards the surveillance camera. From our results we can see that our system is able to significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.

Authors (3)
  1. Simen Thys (1 paper)
  2. Wiebe Van Ranst (2 papers)
  3. Toon Goedemé (17 papers)
Citations (530)

Summary

  • The paper demonstrates that optimized adversarial patches significantly lower CNN-based person detection accuracy.
  • It employs a comprehensive objective integrating Non-Printability Score, Total Variation Loss, and Object Loss for robust patch creation.
  • Experimental results reveal that minimizing object scores dramatically drops recall rates, underscoring vulnerabilities in AI surveillance.

Adversarial Patches for Person Detection in Surveillance Systems

The research paper titled "Fooling automated surveillance cameras: adversarial patches to attack person detection" addresses the creation of adversarial patches intended to deceive object detection models, specifically targeting person detectors within automated surveillance systems. This paper advances the exploration of adversarial attacks by focusing on targets characterized by significant intra-class variability, namely, human figures, as opposed to static objects like stop signs used in previous studies.

Methodological Framework

The authors attack convolutional neural network (CNN) based detectors, leveraging their differentiable structure to construct patches that significantly diminish the detection accuracy of models such as YOLOv2. The learned representations of these CNNs are central to the work: they expose the gradients that the attack exploits, while their invariance to pose, scale, and lighting is precisely what the patch must overcome to remain effective across diverse conditions.

Key to the methodology is the optimization of an adversarial patch via a comprehensive objective function. This function integrates several components (a code sketch of the combined loss follows the list):

  • Non-Printability Score (NPS): Ensures the colors are replicable by standard printers.
  • Total Variation Loss (TV): Promotes smoothness in the patch, reducing visual noise.
  • Object Loss (L_obj): Suppresses the detector's confidence that a person is present in the patched image.
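
A minimal sketch of how such a combined objective could be assembled is given below (in PyTorch); the loss weights, the printable-color list, and the helper names are illustrative assumptions rather than the authors' exact implementation.

```python
import torch

def nps_loss(patch, printable_colors):
    """Non-Printability Score: distance of each patch pixel to its nearest
    printable color. patch: (3, H, W); printable_colors: (N, 3)."""
    flat = patch.view(3, -1).T                   # (H*W, 3) pixel colors
    dists = torch.cdist(flat, printable_colors)  # distance to every printable color
    return dists.min(dim=1).values.mean()

def tv_loss(patch):
    """Total Variation: penalize abrupt changes between neighboring pixels,
    encouraging a smooth, camera-friendly pattern."""
    dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().mean()
    dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().mean()
    return dh + dw

def object_loss(detector_scores):
    """Confidence that a person is detected; here simply the maximum score.
    How this score is extracted depends on the detector's output format."""
    return detector_scores.max()

def total_loss(patch, detector_scores, printable_colors, alpha=0.01, beta=2.5):
    # Weighted sum mirroring the paper's three terms; the weights are assumptions.
    return (alpha * nps_loss(patch, printable_colors)
            + beta * tv_loss(patch)
            + object_loss(detector_scores))
```

During optimization, the patch (a learnable tensor clamped to [0, 1]) is updated by gradient descent on this total loss while the detector's weights stay frozen.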

The adversarial patches are optimized over a dataset of real images of people, with transformations such as changes in lighting and viewing angle applied during optimization, so that the attack remains effective once the patch is printed and placed in a real-world environment.
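
This robustness step can be pictured as applying random appearance and geometric jitter to the patch before it is scaled to each person's bounding box and scored by the detector. The sketch below illustrates the idea; the specific transformations and parameter ranges are assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def randomly_transform(patch, box_size):
    """Apply random jitter to a (3, H, W) patch so the optimized pattern
    stays effective under variation in lighting, orientation, and size."""
    p = patch * torch.empty(1).uniform_(0.8, 1.2)        # brightness/contrast jitter
    p = (p + torch.randn_like(p) * 0.05).clamp(0.0, 1.0)  # sensor-like noise
    angle = float(torch.empty(1).uniform_(-20.0, 20.0))   # small random rotation
    p = TF.rotate(p, angle)
    # Rescale the patch to fit the target person's bounding box (h, w).
    p = F.interpolate(p.unsqueeze(0), size=box_size,
                      mode="bilinear", align_corners=False).squeeze(0)
    return p
```

The transformed patch is then pasted onto the person region of each training image before the image is passed through the detector and the loss is computed.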

Experimental Results

The authors evaluated several strategies for suppressing detections, comparing patches optimized to minimize the class probability of the person class against patches optimized to minimize the detector's objectness score. Quantitative analysis showed that minimizing the objectness score yielded the largest drop in recall, indicating the highest success rate in evading detection. Precision-recall (PR) curves further showed that the optimized patches significantly outperformed random-noise baselines, validating the effectiveness of the targeted approach.
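
In a YOLO-style detector each predicted box carries an objectness score and per-class probabilities, so the compared strategies differ only in which quantity the patch is trained to suppress. The following schematic illustrates the distinction; the tensor layout is an assumed YOLOv2-style output format rather than a detail taken from the paper.

```python
import torch

def detection_score(pred, person_class=0, mode="obj"):
    """Score the patch is optimized to minimize, for one image.

    pred: (num_boxes, 5 + num_classes) YOLO-style predictions, assumed laid out
          as [x, y, w, h, objectness, class_0, ..., class_{C-1}].
    """
    objectness = torch.sigmoid(pred[:, 4])
    class_prob = torch.softmax(pred[:, 5:], dim=1)[:, person_class]
    if mode == "obj":        # suppress objectness only (largest recall drop reported)
        scores = objectness
    elif mode == "cls":      # suppress the person class probability only
        scores = class_prob
    else:                    # "obj_cls": suppress their product
        scores = objectness * class_prob
    return scores.max()      # target the most confident remaining detection
```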

The tabulated results showed a stark contrast between detection performance on unperturbed images and on images containing the adversarial patch, emphasizing the potential threat such vulnerabilities pose to systems relying on AI-based surveillance.

Implications and Future Directions

The implications of this research are notable for both the security domain and broader AI applications. The ability to covertly circumvent surveillance systems underscores a critical gap in the robustness of current AI systems against adversarial manipulation. Practical considerations include the need for stronger adversarial defense mechanisms, particularly in security-sensitive applications.

Future research might focus on enhancing patch robustness through advanced transformations, improving transferability across different detection architectures like Faster R-CNN, or exploring dynamic adversarial systems like adversarial clothing. Such investigations could lead to broader, more resilient defense strategies against adversarial threats, helping secure AI systems in diverse operational contexts.

In conclusion, this paper takes a meaningful step toward understanding how the reliability of AI-driven surveillance can be undermined through adversarial means. It opens avenues for developing countermeasures and for further exploring adversarial attack vectors against highly variable targets such as humans.
