Invisible Mask: Practical Attacks on Face Recognition with Infrared (1803.04683v1)

Published 13 Mar 2018 in cs.CR

Abstract: Accurate face recognition techniques make a series of critical applications possible: police officers could employ them to retrieve criminals' faces from surveillance video streams; cross-border travelers could pass a face authentication inspection line without the involvement of officers. Nonetheless, when public security heavily relies on such intelligent systems, designers should deliberately consider the emerging attacks aimed at misleading systems that employ face recognition. We propose a brand-new kind of attack against face recognition systems, realized by illuminating the subject with infrared light according to adversarial examples worked out by our algorithm; face recognition systems can thus be bypassed or misled while the infrared perturbations remain invisible to the naked eye. By launching this kind of attack, an attacker can not only dodge surveillance cameras; more importantly, he can impersonate his target victim and pass the face authentication system, provided only that the victim's photo is available to the attacker. Moreover, the attack is entirely unobservable by nearby people, because not only is the light invisible, but the device we built to launch the attack is also small enough to go unnoticed. According to our study on a large dataset, attackers have a success rate of over 70% for finding such an adversarial example that can be implemented with infrared. To the best of our knowledge, our work is the first to shed light on the severity of the threat posed by infrared adversarial examples against face recognition.

Citations (97)

Summary

  • The paper introduces an innovative attack using infrared light to generate adversarial perturbations that mislead face recognition systems undetectably.
  • The paper demonstrates a 100% success rate in dodging and over 70% in impersonation, using practical consumer-grade hardware.
  • The paper details an LED-based setup and calibration algorithm, highlighting critical security vulnerabilities in current facial recognition technologies.

Invisible Mask: Practical Attacks on Face Recognition with Infrared

The paper "Invisible Mask: Practical Attacks on Face Recognition with Infrared" investigates vulnerabilities in face recognition (FR) systems using deep learning (DL), specifically targeting these systems with an innovative method involving infrared light. The authors explore and demonstrate effective adversarial attacks that can cause misrecognition in FR systems without visible alterations detectable to human observers, thus underscoring a new vector of security vulnerabilities.

Overview

The authors present a novel attack technique utilizing infrared light to create adversarial perturbations, thereby misleading FR systems. This approach distinguishes itself by ensuring the perturbations are invisible to the human eye and can be practically implemented with a small, inconspicuous device. The attack exploits the fact that camera sensors respond to near-infrared light that human eyes cannot perceive, so the adversarial perturbations register in the captured image while remaining imperceptible to nearby observers.

Major Claims

  1. Stealthy Facial Morphing with Infrared: The paper contributes the first reported method that uses infrared light to obscure one's appearance or impersonate others within FR systems, achieving a 100% success rate in dodging and over 70% in impersonation scenarios.
  2. Development of a New Algorithm: The authors present an algorithm that searches for adversarial examples under the constraints of consumer-level hardware, specifically infrared LEDs, so that the computed perturbations can be physically reproduced; a minimal sketch of such a constrained search appears after this list.
  3. Implementation and Evaluation: The researchers conducted extensive experiments with crafted devices to implement their adversarial examples against a popular FR system, FaceNet. Their findings encompass successful dodging and impersonation attacks, showcasing the real-world viability of their technique.
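
The paper's search procedure is only summarized here, so the following is a minimal sketch of what such a hardware-constrained search could look like, assuming each infrared spot can be approximated as a Gaussian brightness bump added to a grayscale face image and optimized by gradient descent against a differentiable face-embedding network. The `embedder` function, the Gaussian spot model, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): model each infrared LED as a
# Gaussian brightness bump on a grayscale face image and run gradient descent
# over spot positions, radii, and intensities so that the resulting image is
# embedded close to a target (victim) identity by a face-embedding network.
import torch
import torch.nn.functional as F

def render_spots(face, params):
    """face: (1, 1, H, W) tensor in [0, 1]; params: (n_spots, 4) of (cx, cy, radius, intensity)."""
    _, _, H, W = face.shape
    ys = torch.arange(H, dtype=face.dtype).view(1, H, 1)
    xs = torch.arange(W, dtype=face.dtype).view(1, 1, W)
    cx = params[:, 0:1, None]
    cy = params[:, 1:2, None]
    radius = params[:, 2:3, None].clamp(min=2.0)
    intensity = params[:, 3:4, None].clamp(0.0, 1.0)
    # Gaussian falloff around each spot centre; spots add brightness, as an IR LED does on a camera sensor.
    spots = intensity * torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * radius ** 2))
    return (face + spots.sum(dim=0, keepdim=True).unsqueeze(0)).clamp(0.0, 1.0)

def find_adversarial_spots(face, target_emb, embedder, n_spots=3, steps=500):
    """Impersonation variant: pull the spotted face toward the victim's embedding."""
    H, W = face.shape[-2:]
    params = torch.stack([
        torch.rand(n_spots) * W,          # spot centre x (pixels)
        torch.rand(n_spots) * H,          # spot centre y (pixels)
        torch.full((n_spots,), 15.0),     # spot radius (pixels)
        torch.full((n_spots,), 0.5),      # spot brightness
    ], dim=1).requires_grad_()
    opt = torch.optim.Adam([params], lr=1.0)
    for _ in range(steps):
        emb = embedder(render_spots(face, params))
        loss = 1.0 - F.cosine_similarity(emb, target_emb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return params.detach()
```

For a dodging attack, the same loop would instead maximize the distance between the spotted face and the attacker's own enrolled embedding rather than minimizing the distance to a victim's embedding.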

Methodological Insights

The attack's practicality stems from tiny, inconspicuous infrared LEDs mounted under the peak of a cap, which shine infrared light onto the wearer's face. An algorithm determines the optimal placement and intensity of these infrared lights, turning them into adversarial examples that FR systems misinterpret. Further, a calibration tool is developed to help attackers adjust the LEDs' parameters flexibly, aligning the physical setup with the specifications of the algorithmic adversarial example.
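
As a rough illustration of what such a calibration aid might do (the paper's actual tool is not reproduced here), one could compare the spot the camera actually observes against the spot the search algorithm requested; the `measure_spot` helper and the `(cx, cy, intensity)` target triple below are illustrative assumptions.

```python
# Illustrative calibration aid (assumed, not the authors' tool): find the
# brightest blob the camera observes on the face and report how far it is,
# in position and intensity, from the spot requested by the search algorithm.
import cv2
import numpy as np

def measure_spot(frame_gray, blur_ksize=11):
    """Return (x, y, peak_brightness) of the brightest region in a grayscale frame."""
    blurred = cv2.GaussianBlur(frame_gray, (blur_ksize, blur_ksize), 0)
    _, max_val, _, max_loc = cv2.minMaxLoc(blurred)
    return max_loc[0], max_loc[1], max_val

def calibration_error(measured, target):
    """Compare an observed spot with a target spot, both given as (cx, cy, intensity)."""
    mx, my, mi = measured
    tx, ty, ti = target
    return {
        "position_px": float(np.hypot(mx - tx, my - ty)),
        "intensity_gap": float(abs(mi - ti)),
    }
```

The attacker would then iterate: reposition or dim an LED, re-measure, and stop once both gaps fall below a chosen tolerance.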

Implications and Future Directions

The implications of the paper are significant for both the FR technology landscape and adversarial ML research. By highlighting these vulnerabilities, the work urges a re-evaluation of face-centric security deployments, particularly in contexts like surveillance and authentication where reliability is paramount.

On a theoretical level, the research pushes the boundaries of adversarial learning's applicability to non-visible spectrums. Practically, it necessitates integrating new defense mechanisms that can preemptively detect and mitigate such adversarial inputs.
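
One conceivable pre-screening defense (an assumption for illustration, not something the paper implements) would flag face crops containing small, unusually bright blobs of the kind an infrared LED produces on a camera sensor before the crop reaches the recognizer.

```python
# Naive illustrative defense (not from the paper): flag face crops containing
# localized regions far brighter than the rest of the face, which is how an
# infrared LED typically appears to a camera sensor.
import numpy as np

def has_suspicious_bright_spot(face_gray, z_thresh=4.0, min_area_px=30):
    """face_gray: 2-D array of pixel intensities; returns True if an anomalous bright blob is present."""
    img = face_gray.astype(np.float32)
    mu, sigma = img.mean(), img.std() + 1e-6
    outliers = (img - mu) / sigma > z_thresh   # pixels much brighter than the face average
    return int(outliers.sum()) >= min_area_px
```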

In future work, extending these attacks to black-box models could increase their real-world relevance, as attackers typically lack white-box access to deployed systems. Additionally, improved IR spot modeling or the use of IR projectors could dramatically enhance the attack's adaptability and success rates.

The findings illuminate a critical need for innovation in securing FR systems and further delineate the complexities in safeguarding AI architectures against subtle yet effective adversarial attacks.
