GGT: Graph-Guided Testing for Adversarial Sample Detection of Deep Neural Network (2107.07043v1)

Published 9 Jul 2021 in cs.LG, cs.AI, and cs.CR

Abstract: Deep Neural Networks (DNNs) are known to be vulnerable to adversarial samples, whose detection is crucial for the wide application of these models. Recently, a number of deep testing methods from software engineering were proposed to find vulnerabilities in DNN systems, and one of them, Model Mutation Testing (MMT), was successfully used to detect various adversarial samples generated by different kinds of adversarial attacks. However, the mutated models in MMT are always huge in number (e.g., over 100 models) and lack diversity (e.g., they can be easily circumvented by high-confidence adversarial samples), which makes the approach less efficient in real applications and less effective at detecting high-confidence adversarial samples. In this study, we propose Graph-Guided Testing (GGT) for adversarial sample detection to overcome these challenges. GGT generates pruned models under the guidance of graph characteristics; each pruned model has only about 5% of the parameters of a mutated model in MMT, and the graph-guided models have higher diversity. Experiments on CIFAR10 and SVHN validate that GGT performs much better than MMT in both effectiveness and efficiency.
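The detection idea shared by MMT and GGT is that an adversarial input tends to sit near a decision boundary, so its predicted label is unstable across an ensemble of slightly perturbed (mutated or pruned) copies of the model, while a clean input's label stays mostly unchanged. A minimal sketch of this label-change-rate test is below; the function names and the threshold value are illustrative assumptions, not taken from the paper, and the ensemble predictions would in practice come from running the pruned models on the input.

```python
import numpy as np

def label_change_rate(original_label, ensemble_labels):
    """Fraction of ensemble models whose prediction differs from the
    original (unperturbed) model's prediction for the same input."""
    ensemble_labels = np.asarray(ensemble_labels)
    return float(np.mean(ensemble_labels != original_label))

def is_adversarial(original_label, ensemble_labels, threshold=0.2):
    """Flag the input as adversarial if the label change rate across the
    perturbed models exceeds the (illustrative) threshold."""
    return label_change_rate(original_label, ensemble_labels) > threshold

# Clean input: the perturbed models mostly agree with the original label.
clean_flag = is_adversarial(3, [3, 3, 3, 3, 2])  # change rate 0.2 -> False
# Adversarial input: predictions are unstable across the perturbed models.
adv_flag = is_adversarial(3, [1, 3, 7, 3, 0])    # change rate 0.6 -> True
print(clean_flag, adv_flag)  # False True
```

GGT's contribution, per the abstract, is in how the ensemble is built: graph-guided pruning yields much smaller models (about 5% of the parameters of an MMT mutant) with higher diversity, so fewer models suffice and high-confidence adversarial samples are harder to slip past the ensemble.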

Authors (8)
  1. Zuohui Chen (10 papers)
  2. Renxuan Wang (2 papers)
  3. Jingyang Xiang (11 papers)
  4. Yue Yu (343 papers)
  5. Xin Xia (171 papers)
  6. Shouling Ji (136 papers)
  7. Qi Xuan (113 papers)
  8. Xiaoniu Yang (38 papers)
Citations (1)
