Defense against Backdoor Attacks via Identifying and Purifying Bad Neurons (2208.06537v1)

Published 13 Aug 2022 in cs.LG, cs.CR, and cs.CV

Abstract: The opacity of neural networks leads their vulnerability to backdoor attacks, where hidden attention of infected neurons is triggered to override normal predictions to the attacker-chosen ones. In this paper, we propose a novel backdoor defense method to mark and purify the infected neurons in the backdoored neural networks. Specifically, we first define a new metric, called benign salience. By combining the first-order gradient to retain the connections between neurons, benign salience can identify the infected neurons with higher accuracy than the commonly used metric in backdoor defense. Then, a new Adaptive Regularization (AR) mechanism is proposed to assist in purifying these identified infected neurons via fine-tuning. Due to the ability to adapt to different magnitudes of parameters, AR can provide faster and more stable convergence than the common regularization mechanism in neuron purifying. Extensive experimental results demonstrate that our method can erase the backdoor in neural networks with negligible performance degradation.

Authors (5)
  1. Mingyuan Fan (35 papers)
  2. Yang Liu (2253 papers)
  3. Cen Chen (81 papers)
  4. Ximeng Liu (45 papers)
  5. Wenzhong Guo (23 papers)
Citations (4)
