
Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency (2007.09763v2)

Published 19 Jul 2020 in cs.CV and cs.LG

Abstract: There has been a recent surge in research on adversarial perturbations that defeat Deep Neural Networks (DNNs) in machine vision; most of these perturbation-based attacks target object classifiers. Inspired by the observation that humans are able to recognize objects that appear out of place in a scene or alongside other unlikely objects, we augment the DNN with a system that learns context consistency rules during training and checks for violations of these rules during testing. Our approach builds a set of auto-encoders, one for each object class, trained so that they output a large discrepancy between input and output when an added adversarial perturbation violates the learned context consistency rules. Experiments on PASCAL VOC and MS COCO show that our method effectively detects various adversarial attacks and achieves high ROC-AUC (over 0.95 in most cases); this corresponds to over 20% improvement over a state-of-the-art context-agnostic method.
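The detection rule described in the abstract can be illustrated with a minimal sketch: one auto-encoder per object class reconstructs a context feature vector, and a large reconstruction discrepancy flags the region as adversarial. All names below (`IdentityAE`, `detect`, the prototype-based "auto-encoder") are hypothetical stand-ins for illustration, not the paper's actual models or code.

```python
import numpy as np

class IdentityAE:
    """Stand-in for a trained per-class auto-encoder (illustrative only)."""
    def __init__(self, prototype):
        self.prototype = np.asarray(prototype, dtype=float)

    def reconstruct(self, x):
        # A real auto-encoder would map x toward the manifold of context
        # features seen for this class during training; here we simply
        # pull x halfway toward a stored class prototype.
        return 0.5 * (np.asarray(x, dtype=float) + self.prototype)

def detect(x, predicted_class, autoencoders, threshold):
    """Flag x as adversarial if the auto-encoder for the *predicted*
    class reconstructs it poorly (context inconsistency)."""
    ae = autoencoders[predicted_class]
    discrepancy = np.linalg.norm(np.asarray(x, dtype=float) - ae.reconstruct(x))
    return discrepancy > threshold

# Two toy classes with 2-D context features (hypothetical values).
aes = {"car": IdentityAE([1.0, 0.0]), "cat": IdentityAE([0.0, 1.0])}
print(detect([0.9, 0.1], "car", aes, threshold=0.5))  # consistent context -> False
print(detect([0.0, 1.0], "car", aes, threshold=0.5))  # context mismatch -> True
```

A benign "car" region whose context features lie near the car auto-encoder's training distribution reconstructs well, while a perturbed region labeled "car" but carrying cat-like context produces a large discrepancy and is flagged.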

Authors (8)
  1. Shasha Li (57 papers)
  2. Shitong Zhu (8 papers)
  3. Sudipta Paul (12 papers)
  4. Amit Roy-Chowdhury (10 papers)
  5. Chengyu Song (33 papers)
  6. Srikanth Krishnamurthy (4 papers)
  7. Ananthram Swami (97 papers)
  8. Kevin S Chan (3 papers)
Citations (34)
