Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models (2210.09545v1)

Published 18 Oct 2022 in cs.CL, cs.CR, and cs.LG

Abstract: Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attacks. In NLP, DNNs are often backdoored during the fine-tuning process of a large-scale Pre-trained Language Model (PLM) with poisoned samples. Although the clean weights of PLMs are readily available, existing methods have ignored this information in defending NLP models against backdoor attacks. In this work, we take the first step to exploit the pre-trained (unfine-tuned) weights to mitigate backdoors in fine-tuned language models. Specifically, we leverage the clean pre-trained weights via two complementary techniques: (1) a two-step Fine-mixing technique, which first mixes the backdoored weights (fine-tuned on poisoned data) with the pre-trained weights, then fine-tunes the mixed weights on a small subset of clean data; (2) an Embedding Purification (E-PUR) technique, which mitigates potential backdoors existing in the word embeddings. We compare Fine-mixing with typical backdoor mitigation methods on three single-sentence sentiment classification tasks and two sentence-pair classification tasks and show that it outperforms the baselines by a considerable margin in all scenarios. We also show that our E-PUR method can benefit existing mitigation methods. Our work establishes a simple but strong baseline defense for secure fine-tuned NLP models against backdoor attacks.
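The abstract describes the first step of Fine-mixing only at a high level (mixing backdoored fine-tuned weights with the clean pre-trained weights before clean-data fine-tuning). The sketch below is a minimal, illustrative PyTorch reading of that step; the function name `fine_mix`, the keep-ratio `rho`, and the element-wise random mask are assumptions for illustration and are not taken from the paper, whose exact mixing rule may differ.

```python
import torch

def fine_mix(finetuned_state, pretrained_state, rho=0.5):
    """Illustrative sketch of Fine-mixing, step 1 (assumed mixing rule).

    For each floating-point parameter, randomly keep a fraction `rho` of the
    fine-tuned (possibly backdoored) values and reset the rest to the clean
    pre-trained values.
    """
    mixed = {}
    for name, w_ft in finetuned_state.items():
        # Parameters with no clean counterpart (e.g. a newly added task head)
        # and non-float buffers are copied from the fine-tuned model as-is.
        if name not in pretrained_state or not torch.is_floating_point(w_ft):
            mixed[name] = w_ft.clone()
            continue
        w_pre = pretrained_state[name]
        # Element-wise mask: 1 keeps the fine-tuned weight, 0 resets it to
        # the pre-trained weight.
        keep = (torch.rand_like(w_ft) < rho).to(w_ft.dtype)
        mixed[name] = keep * w_ft + (1.0 - keep) * w_pre
    return mixed

# Usage (hypothetical names):
#   mixed = fine_mix(backdoored_model.state_dict(), clean_plm.state_dict())
#   backdoored_model.load_state_dict(mixed)
# Step 2 of Fine-mixing (not shown): fine-tune the mixed model briefly on a
# small clean subset to recover clean-task accuracy.
```

Step 2 and the E-PUR embedding-purification technique are standard fine-tuning and an embedding-level cleanup, respectively, and are not sketched here since the abstract gives no further procedural detail.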

Authors (5)
  1. Zhiyuan Zhang (129 papers)
  2. Lingjuan Lyu (131 papers)
  3. Xingjun Ma (114 papers)
  4. Chenguang Wang (59 papers)
  5. Xu Sun (194 papers)
Citations (37)
