
A Survey of Black-Box Adversarial Attacks on Computer Vision Models (1912.01667v3)

Published 3 Dec 2019 in cs.LG, cs.CR, cs.CV, and stat.ML

Abstract: Machine learning has seen tremendous advances in the past few years, which has led to deep learning models being deployed in a wide variety of everyday applications. Attacks on such models using perturbations, particularly in real-life scenarios, pose a severe challenge to their applicability and have pushed research toward enhancing the robustness of these models. Since the introduction of these perturbations by Szegedy et al. [1], a significant amount of research has focused on the reliability of such models, primarily in two settings: white-box, where the adversary has access to the targeted model and its parameters; and black-box, which resembles a real-life scenario in which the adversary has almost no knowledge of the model under attack. To provide comprehensive security coverage, it is essential to identify, study, and build defenses against such attacks. Hence, in this paper, we present a comprehensive comparative study of various black-box adversarial attacks and defense techniques.
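To make the black-box setting concrete: the attacker can only query the model for its output probabilities and must perturb the input using that feedback alone. The sketch below illustrates one simple query-based strategy in the spirit of SimBA (Guo et al.); it is not the method proposed in this survey, and the linear "victim" classifier is a toy stand-in for any model that can only be queried.

```python
import numpy as np

def black_box_attack(predict_proba, x, true_label,
                     epsilon=0.2, max_queries=500, seed=0):
    """Query-only attack sketch: try +/-epsilon steps along random
    coordinates, keeping any step that lowers the model's confidence
    in the true label. No gradients or parameters are used."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    best_prob = predict_proba(x_adv)[true_label]
    queries = 0
    for d in rng.permutation(x.size):      # random coordinate order
        if queries >= max_queries:
            break
        for step in (+epsilon, -epsilon):
            candidate = x_adv.copy()
            candidate.flat[d] += step
            prob = predict_proba(candidate)[true_label]
            queries += 1
            if prob < best_prob:           # keep only improving steps
                x_adv, best_prob = candidate, prob
                break
    return x_adv, best_prob, queries

# Toy victim model (hypothetical): softmax over a fixed linear map.
# The attacker never sees W; it only calls predict_proba.
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def predict_proba(x):
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = np.array([0.5, -0.5])                  # confidently class 0
x_adv, p_adv, n_queries = black_box_attack(predict_proba, x, true_label=0)
print(p_adv < predict_proba(x)[0])         # confidence in true label dropped
```

The same query-and-compare loop underlies many score-based black-box attacks; they differ mainly in how candidate perturbations are proposed and how many queries they spend per step.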

Authors (4)
  1. Siddhant Bhambri (16 papers)
  2. Sumanyu Muku (2 papers)
  3. Avinash Tulasi (3 papers)
  4. Arun Balaji Buduru (47 papers)
Citations (75)
