
Odyssey: Creation, Analysis and Detection of Trojan Models (2007.08142v2)

Published 16 Jul 2020 in cs.CV

Abstract: Along with the success of deep neural network (DNN) models rise threats to their integrity. A recent threat is the Trojan attack, where an attacker interferes with the training pipeline by inserting triggers into some of the training samples and trains the model to act maliciously only on samples that contain the trigger. Since knowledge of the triggers is privy to the attacker, detecting Trojan networks is challenging. Existing Trojan detectors make strong assumptions about the types of triggers and attacks. We propose a detector based on the analysis of intrinsic DNN properties that are affected by the Trojaning process. For a comprehensive analysis, we develop Odysseus, the most diverse dataset to date, with over 3,000 clean and Trojan models. Odysseus covers a large spectrum of attacks, generated by leveraging the versatility in trigger designs and source-to-target class mappings. Our analysis shows that Trojan attacks affect the classifier margin and the shape of the decision boundary around the manifold of clean data. Exploiting these two factors, we propose an efficient Trojan detector that operates without any knowledge of the attack and significantly outperforms existing methods. Through a comprehensive set of experiments, we demonstrate the efficacy of the detector across model architectures, unseen triggers, and regularized models.
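
The Trojan attack described in the abstract, stamping a trigger onto a fraction of the training samples and relabeling them to an attacker-chosen class, can be illustrated with a minimal data-poisoning sketch. The function below is not from the paper; the trigger shape, placement, poison rate, and target class are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   patch_size=3, trigger_value=1.0, seed=0):
    """Stamp a small square trigger onto a random subset of images and
    relabel those samples to the attacker's target class.

    Illustrative sketch only: the trigger design, poison rate, and
    placement are assumptions, not the paper's settings.
    images: float array of shape (N, H, W, C); labels: int array of shape (N,).
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Solid square trigger in the bottom-right corner of each poisoned image.
    images[idx, -patch_size:, -patch_size:, :] = trigger_value
    # Source-to-target class mapping: here every poisoned sample maps to one target class.
    labels[idx] = target_class
    return images, labels, idx
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger is present, which is why detection without knowledge of the trigger is difficult.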

Authors (5)
  1. Marzieh Edraki (6 papers)
  2. Nazmul Karim (21 papers)
  3. Nazanin Rahnavard (43 papers)
  4. Ajmal Mian (136 papers)
  5. Mubarak Shah (208 papers)
Citations (12)
