
Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer (2208.06592v1)

Published 13 Aug 2022 in cs.CR and cs.CV

Abstract: Backdoor attacks have been shown to be a serious security threat against deep learning models, so detecting whether a given model has been backdoored is a crucial task. Existing defenses are mainly built upon the observation that the backdoor trigger is usually small or affects the activations of only a few neurons. However, these observations are violated in many cases, especially for advanced backdoor attacks, which hinders the performance and applicability of existing defenses. In this paper, we propose a backdoor defense, DTInspector, built upon a new observation: an effective backdoor attack usually requires high prediction confidence on the poisoned training samples, so as to ensure that the trained model exhibits the targeted behavior with high probability. Based on this observation, DTInspector first learns a patch that changes the predictions of most high-confidence data, and then determines whether a backdoor exists by checking the ratio of prediction changes after applying the learned patch to the low-confidence data. Extensive evaluations on five backdoor attacks, four datasets, and three advanced attack types demonstrate the effectiveness of the proposed defense.
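The detection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `dtinspector_decision`, the quantile used to select low-confidence samples, and the decision threshold are all assumptions made here for clarity. It shows only the final check, i.e. measuring how often predictions flip on low-confidence data once the learned patch has been applied, and skips the patch-learning step itself.

```python
import numpy as np

def dtinspector_decision(preds_before, preds_after, confidences,
                         low_conf_quantile=0.2, change_ratio_threshold=0.5):
    """Hypothetical sketch of the final decision step described in the paper.

    preds_before: predicted labels on clean (unpatched) samples
    preds_after:  predicted labels on the same samples with the learned patch applied
    confidences:  the model's prediction confidence on each clean sample

    A backdoored model is expected to show a high rate of prediction
    changes on low-confidence data when the patch is applied.
    """
    preds_before = np.asarray(preds_before)
    preds_after = np.asarray(preds_after)
    confidences = np.asarray(confidences)

    # Select the low-confidence portion of the data (quantile is an assumption).
    cutoff = np.quantile(confidences, low_conf_quantile)
    low_conf = confidences <= cutoff

    # Fraction of low-confidence samples whose prediction flips under the patch.
    change_ratio = float((preds_before[low_conf] != preds_after[low_conf]).mean())

    # Flag a backdoor if the flip ratio exceeds the (assumed) threshold.
    return change_ratio >= change_ratio_threshold, change_ratio
```

For example, if the patch flips every low-confidence prediction, the ratio is 1.0 and the model is flagged; if no low-confidence prediction changes, the ratio is 0.0 and it is not.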

Authors (6)
  1. Tong Wang (144 papers)
  2. Yuan Yao (292 papers)
  3. Feng Xu (180 papers)
  4. Miao Xu (43 papers)
  5. Shengwei An (14 papers)
  6. Ting Wang (213 papers)
Citations (2)
