
Backdoor Vulnerabilities in Normally Trained Deep Learning Models (2211.15929v1)

Published 29 Nov 2022 in cs.CR and cs.LG

Abstract: We conduct a systematic study of backdoor vulnerabilities in normally trained Deep Learning models. They are as dangerous as backdoors injected by data poisoning because both can be equally exploited. We use 20 different types of injected backdoor attacks from the literature as guidance and study their correspondences in normally trained models, which we call natural backdoor vulnerabilities. We find that natural backdoors are widespread, with most injected backdoor attacks having natural correspondences. We categorize these natural backdoors and propose a general detection framework. It finds 315 natural backdoors in 56 normally trained models downloaded from the Internet, covering all the categories, whereas existing scanners designed for injected backdoors detect at most 65. We also study the root causes of and defenses against natural backdoors.
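Backdoor scanners of the kind the abstract mentions typically work by trigger inversion: optimizing a small input perturbation that flips a model's prediction to a target label, then flagging the model if an unusually small trigger suffices. The sketch below illustrates that idea on a toy linear classifier; the model, function names, and hyperparameters are illustrative assumptions, not the paper's actual framework.

```python
import numpy as np

# Toy stand-in for a trained classifier: a fixed random linear model
# with 3 classes over 8 input features. Real scanners invert triggers
# against deep networks; this setup is purely illustrative.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))

def predict(x):
    return int(np.argmax(W @ x))

def invert_trigger(x, target, steps=500, lr=0.5):
    """Search for a small additive trigger `delta` such that
    predict(x + delta) == target, by ascending the margin between
    the target logit and the strongest competing logit."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        logits = W @ (x + delta)
        # index of the strongest non-target class
        other = int(np.argmax(np.delete(logits, target)))
        other = other if other < target else other + 1
        # gradient of (target logit - competitor logit) w.r.t. delta
        delta += lr * (W[target] - W[other])
        if predict(x + delta) == target:
            break
    return delta

x = rng.normal(size=8)
target = (predict(x) + 1) % 3        # pick a label the model does not output
delta = invert_trigger(x, target)
print(predict(x + delta) == target)  # the inverted trigger flips the prediction
```

A scanner would repeat this inversion for every (source, target) label pair and compare the resulting trigger sizes: a pair whose trigger is anomalously small indicates a backdoor, whether injected by poisoning or arising naturally from normal training.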

Authors (10)
  1. Guanhong Tao (33 papers)
  2. Zhenting Wang (41 papers)
  3. Siyuan Cheng (41 papers)
  4. Shiqing Ma (56 papers)
  5. Shengwei An (14 papers)
  6. Yingqi Liu (28 papers)
  7. Guangyu Shen (21 papers)
  8. Zhuo Zhang (42 papers)
  9. Yunshu Mao (2 papers)
  10. Xiangyu Zhang (328 papers)
Citations (15)
