
Advances in adversarial attacks and defenses in computer vision: A survey (2108.00401v2)

Published 1 Aug 2021 in cs.CV, cs.CR, cs.CY, and cs.LG

Abstract: Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013 [1], it has attracted significant attention from researchers in multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses) until the advent of 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in the prestigious sources of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminologies for non-experts in this domain. Finally, this article discusses challenges and the future outlook of this direction based on the literature reviewed herein and in [2].

Authors (4)
  1. Naveed Akhtar (77 papers)
  2. Ajmal Mian (136 papers)
  3. Navid Kardan (7 papers)
  4. Mubarak Shah (208 papers)
Citations (200)

Summary

Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey

The paper "Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey" by Akhtar, Mian, Kardan, and Shah offers an extensive review of developments in adversarial machine learning within the field of computer vision, updating and building upon their earlier work. This survey primarily focuses on the progress made in this dynamic field post-2018, highlighting both adversarial attack techniques and defense mechanisms.

Conventional adversarial attacks exploit vulnerabilities in deep learning models by introducing small perturbations to input images, causing the models to produce erroneous predictions. Since the realization of these vulnerabilities in 2013, the area has swiftly gathered interest and research momentum. The paper addresses the core advancements in adversarial methods, such as gradient-based attacks, which perturb inputs via gradient ascent on the model's loss surface. It also revisits FGSM and its iterative, projected variant PGD, which remain foundational for crafting attacks and have inspired a plethora of subsequent methods focused on efficiency and transferability.
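To make the gradient-ascent idea concrete, the following is a minimal PyTorch sketch of PGD, with FGSM recoverable as its single-step special case. It assumes a differentiable classifier `model` and image tensors scaled to [0, 1]; the function name and hyperparameter defaults are illustrative, not taken from the survey.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Illustrative PGD sketch: repeated signed-gradient ascent on the loss,
    projected back onto an L-infinity ball of radius eps around the input.
    With steps=1 and alpha=eps this reduces to (randomly started) FGSM."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()   # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project to eps-ball
        x_adv = x_adv.clamp(0, 1)                      # keep valid pixel range
    return x_adv
```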

A substantial portion of the discussion is dedicated to emergent black-box attacks, which operate without internal knowledge of the targeted model, covering advances in both transfer-based and query-based strategies. The survey also notes the practical challenges associated with physical-world attacks, along with efforts to devise robust mechanisms for recovering from adversarial examples. Physical-world strategies, such as adversarial patches and camouflage, offer insight into realistic deployment scenarios and the limitations encountered when models leave controlled lab environments.
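As a flavor of the query-based family, the sketch below works in the spirit of SimBA-style attacks: it queries only the model's output probabilities, trying one coordinate at a time and keeping a perturbation only when the true-class score drops. The `predict_probs` interface (a function returning a probability vector for one image), the integer label `y`, and all hyperparameters are assumptions for illustration.

```python
import torch

def query_attack(predict_probs, x, y, eps=0.2, max_coords=500):
    """SimBA-flavored black-box sketch: no gradients, only score queries.
    Tries +eps / -eps on randomly ordered coordinates, keeping a step
    whenever the probability assigned to the true label y decreases."""
    x_adv = x.clone()
    best = predict_probs(x_adv)[y]
    order = torch.randperm(x.numel())[:max_coords]
    for idx in order:
        step = torch.zeros_like(x).view(-1)
        step[idx] = eps
        step = step.view_as(x)
        for candidate in (x_adv + step, x_adv - step):
            candidate = candidate.clamp(0, 1)
            p = predict_probs(candidate)[y]   # one model query
            if p < best:                      # keep only helpful steps
                x_adv, best = candidate, p
                break
    return x_adv
```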

The paper then explores the complexities of defenses, emphasizing adversarial training as a robust countermeasure against such perturbations. Adversarial training actively incorporates adversarial examples into the training loop and, underpinned by robust optimization, is recognized for its efficacy in enhancing model resilience against both known and unknown attacks. The survey further highlights the challenge of balancing robustness against accuracy on clean data, a recurring trade-off in defended models.
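A minimal training-loop sketch of this min-max idea, in the style of Madry et al.'s PGD adversarial training, is shown below; it reuses the illustrative `pgd_attack` helper from the earlier sketch and assumes a standard PyTorch data loader and optimizer.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, eps=8/255):
    """One epoch of PGD adversarial training: the inner attack maximizes
    the loss within the eps-ball, the outer update minimizes it."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps=eps)  # inner maximization
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)   # loss on adversarial batch
        loss.backward()                           # outer minimization step
        optimizer.step()
```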

Additionally, "certified defenses" have emerged as a promising direction, aiming to provide guarantees of robustness against a range of adversarial perturbations. These theoretically grounded approaches seek to deliver assurances on the minimum perturbation size required to mislead a model, thus furnishing a measure of security beyond empirical defense.

The authors provide a well-rounded discourse on the theoretical explorations into why adversarial examples exist, touching upon hypotheses like model linearity, high-dimensional decision boundaries, and non-robust features. Despite significant research efforts, a universal explanation remains elusive, and these adversarial vulnerabilities continue to intrigue researchers, fueling ongoing investigations.
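The linearity hypothesis, due to Goodfellow et al., can be stated in one line; the block below reproduces the standard back-of-the-envelope argument rather than a result derived in this survey.

```latex
% Linearity hypothesis: for a linear score s(x) = w^\top x and the
% L_\infty-bounded perturbation \eta = \epsilon\,\mathrm{sign}(w),
s(x + \eta) - s(x) \;=\; w^\top \eta \;=\; \epsilon \,\lVert w \rVert_1 \;\approx\; \epsilon\, m\, n
% where m is the average |w_i| and n the input dimension: per-pixel changes
% too small to perceive can still shift the activation by an amount that
% grows linearly with dimensionality.
```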

In conclusion, the paper offers a comprehensive overview of the adversarial landscape in computer vision, indicating that the quest to secure deep learning models against adversarial threats is far from over. As AI and deep learning applications expand into critical domains, ensuring their robustness against adversarial exploits is paramount. Consequently, both innovative attack strategies and resilient defense mechanisms will continue to evolve, potentially informing broader AI application domains beyond computer vision. This survey, through its detailed enumeration of current advancements, serves as a vital resource for researchers and practitioners aiming to grasp the current state and emerging trends in adversarial machine learning.