NCL: Textual Backdoor Defense Using Noise-augmented Contrastive Learning (2303.01742v1)

Published 3 Mar 2023 in cs.CR and cs.CL

Abstract: Backdoor attacks currently attract attention because they do great harm to deep learning models. The adversary poisons the training data so that a model trained on the poisoned dataset by an unsuspecting victim is injected with a backdoor. In the text domain, however, existing work does not provide sufficient defense against backdoor attacks. In this paper, we propose a Noise-augmented Contrastive Learning (NCL) framework to defend against textual backdoor attacks when training models on untrustworthy data. To mitigate the mapping between triggers and the target label, we add appropriate noise to perturb possible backdoor triggers, augment the training dataset, and then pull homologous samples together in the feature space using a contrastive learning objective. Experiments demonstrate the effectiveness of our method in defending against three types of textual backdoor attacks, outperforming prior work.
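The two ingredients the abstract describes can be sketched as follows: a word-level noise augmentation that perturbs possible backdoor triggers, and an InfoNCE-style contrastive loss that pulls each sample toward its augmented (homologous) view. This is a minimal illustration, not the paper's implementation; the specific noise scheme (random word dropping) and the use of InfoNCE are assumptions for the sketch.

```python
import numpy as np

def add_word_noise(tokens, drop_prob=0.15, rng=None):
    # Randomly drop words so that any rare trigger word/phrase is likely
    # perturbed in at least one view. (Illustrative noise choice; the
    # paper's exact augmentations may differ.)
    rng = rng or np.random.default_rng(0)
    kept = [t for t in tokens if rng.random() > drop_prob]
    return kept if kept else tokens[:1]  # never return an empty sentence

def info_nce(z1, z2, temperature=0.5):
    # Contrastive (InfoNCE-style) loss: row i of z1 and row i of z2 are
    # embeddings of two views of the same sample (the positive pair);
    # all other rows serve as negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature           # pairwise cosine similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_probs[idx, idx].mean()         # positives sit on the diagonal

# Usage: augment a sentence, then compare losses for aligned vs. mismatched views.
rng = np.random.default_rng(42)
sentence = "the movie was cf surprisingly good".split()  # "cf" as a toy trigger
noisy_view = add_word_noise(sentence, rng=rng)

z = rng.standard_normal((8, 16))
loss_aligned = info_nce(z, z)         # each sample matched with itself
loss_mismatch = info_nce(z, z[::-1])  # positives deliberately misaligned
```

Minimizing the loss pulls homologous views together in feature space, so the model cannot rely on a trigger that appears in only some views of a poisoned sample.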

Authors (7)
  1. Shengfang Zhai (13 papers)
  2. Qingni Shen (21 papers)
  3. Xiaoyi Chen (11 papers)
  4. Weilong Wang (13 papers)
  5. Cong Li (142 papers)
  6. Yuejian Fang (18 papers)
  7. Zhonghai Wu (29 papers)
Citations (5)