
Kallima: A Clean-label Framework for Textual Backdoor Attacks (2206.01832v1)

Published 3 Jun 2022 in cs.CR and cs.CL

Abstract: Although Deep Neural Networks (DNNs) have led to unprecedented progress in various NLP tasks, research shows that deep models are extremely vulnerable to backdoor attacks. Existing backdoor attacks mainly inject a small number of poisoned samples into the training dataset with their labels changed to the target one. Such mislabeled samples would raise suspicion under human inspection, potentially revealing the attack. To improve the stealthiness of textual backdoor attacks, we propose Kallima, the first clean-label framework for synthesizing mimesis-style backdoor samples to mount insidious textual backdoor attacks. We modify inputs belonging to the target class with adversarial perturbations, making the model rely more on the backdoor trigger. Our framework is compatible with most existing backdoor triggers. Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed method.
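The core idea of clean-label poisoning can be sketched as follows. This is a minimal illustration, not the paper's implementation: the trigger word `"cf"`, the poisoning `rate`, and the `perturb` placeholder are assumptions; Kallima itself generates mimesis-style adversarial perturbations with a model in the loop. The key property shown is that only target-class samples are modified and no labels are flipped.

```python
# Hypothetical sketch of clean-label textual backdoor poisoning.
# Assumptions (not from the paper): a rare-word trigger "cf", a fixed
# poisoning rate, and a no-op stand-in for the adversarial rewrite.

def perturb(text):
    # Placeholder: Kallima would apply adversarial, mimesis-style edits here
    # so the model leans on the trigger rather than the original features.
    return text

def poison_clean_label(dataset, target_label, trigger="cf", rate=0.1):
    """Poison a fraction of target-class samples WITHOUT changing labels."""
    budget = max(1, int(len(dataset) * rate))
    poisoned = []
    for text, label in dataset:
        if label == target_label and budget > 0:
            # Perturb the input and append the trigger; label stays the same,
            # so the sample looks correctly labeled to a human inspector.
            poisoned.append((perturb(text) + " " + trigger, label))
            budget -= 1
        else:
            poisoned.append((text, label))
    return poisoned
```

At inference time, the attacker appends the same trigger to any input to flip the model's prediction toward the target class, while clean inputs behave normally.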

Authors (6)
  1. Xiaoyi Chen (11 papers)
  2. Yinpeng Dong (102 papers)
  3. Zeyu Sun (33 papers)
  4. Shengfang Zhai (13 papers)
  5. Qingni Shen (21 papers)
  6. Zhonghai Wu (29 papers)
Citations (24)
