TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation (2103.10274v1)

Published 18 Mar 2021 in cs.LG

Abstract: Deep neural networks (DNNs) are vulnerable to "backdoor" poisoning attacks, in which an adversary implants a secret trigger into an otherwise normally functioning model. Detection of backdoors in trained models without access to the training data or example triggers is an important open problem. In this paper, we identify an interesting property of these models: adversarial perturbations transfer from image to image more readily in poisoned models than in clean models. This holds for a variety of model and trigger types, including triggers that are not linearly separable from clean data. We use this feature to detect poisoned models in the TrojAI benchmark, as well as additional models.
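
The detection idea sketched in the abstract translates into a simple procedure: craft an adversarial perturbation on one image, add that same perturbation to other images, and measure how often their predictions change. Below is a minimal, hypothetical illustration in PyTorch; the FGSM crafting step, the 0.5 decision threshold, and the helper names are assumptions for illustration, not the authors' exact pipeline.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, eps=0.03):
    """Craft an adversarial perturbation on a single source image via FGSM."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return eps * x.grad.sign()

def transfer_rate(model, source_x, source_y, other_x, eps=0.03):
    """Fraction of other images whose prediction flips when the perturbation
    crafted on the source image is added to them."""
    delta = fgsm_perturbation(model, source_x, source_y, eps)
    with torch.no_grad():
        clean_pred = model(other_x).argmax(dim=1)
        pert_pred = model((other_x + delta).clamp(0, 1)).argmax(dim=1)
    return (clean_pred != pert_pred).float().mean().item()

def looks_poisoned(model, source_x, source_y, other_x, threshold=0.5):
    """Heuristic flag: backdoored models tend to show much higher
    image-to-image transferability of perturbations than clean ones."""
    model.eval()
    return transfer_rate(model, source_x, source_y, other_x) > threshold
```

In practice one would average the transfer rate over many source images and calibrate the decision threshold on models with known labels (e.g., the TrojAI benchmark mentioned in the abstract).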

Authors (2)
  1. Todd Huster (8 papers)
  2. Emmanuel Ekwedike (3 papers)
Citations (18)
