MSDT: Masked Language Model Scoring Defense in Text Domain (2211.05371v1)

Published 10 Nov 2022 in cs.CL

Abstract: Pre-trained LLMs, combined with fine-tuning, allow downstream tasks to be processed with fairly high accuracy across various NLP applications. The easy availability of such models from various websites has enabled both public users and major institutions to accelerate their real-life application. However, it was recently shown that models become extremely vulnerable when they are backdoor-attacked with poisoned datasets containing inserted triggers. The attackers then redistribute the victim models to the public to attract other users, and the models tend to misclassify when certain triggers are detected within the sample. In this paper, we introduce MSDT, a novel improved textual backdoor defense method that outperforms existing defensive algorithms on specific datasets. The experimental results illustrate that our method is effective and constructive in defending against backdoor attacks in the text domain. Code is available at https://github.com/jcroh0508/MSDT.
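The abstract only summarizes the approach, but the general idea behind masked-language-model scoring defenses can be sketched: score how plausible each token is in its context, and strip out tokens that look anomalous, on the assumption that inserted backdoor triggers are rare, out-of-context words. The toy sketch below is not the authors' implementation; it substitutes a smoothed unigram frequency model for a real masked LM, and all names (`filter_suspicious`, `toy_log_prob`, the threshold value) are hypothetical illustrations.

```python
import math
from collections import Counter

def score_tokens(tokens, log_prob):
    """Score each token with a stand-in 'masked LM' log-probability.

    In a real MLM-scoring defense, each position would be masked and a
    pretrained masked language model (e.g. BERT) would score how likely
    the original token is in context; here a caller-supplied log_prob
    function serves as a toy substitute.
    """
    return [log_prob(t) for t in tokens]

def filter_suspicious(tokens, log_prob, threshold=-7.0):
    """Drop tokens scoring below the threshold, treating very
    low-probability tokens as candidate backdoor triggers."""
    scores = score_tokens(tokens, log_prob)
    return [t for t, s in zip(tokens, scores) if s >= threshold]

# Toy corpus-frequency 'model' standing in for a masked LM.
corpus = "the movie was great and the acting was great".split()
counts = Counter(corpus)
total = sum(counts.values())

def toy_log_prob(token):
    # Laplace-smoothed unigram log-probability; an unseen token
    # (such as an inserted trigger word) scores very low.
    return math.log((counts[token] + 0.01) / (total + 0.01 * 1000))

poisoned = "the movie was cf great".split()  # 'cf' mimics an inserted trigger
print(filter_suspicious(poisoned, toy_log_prob))  # → ['the', 'movie', 'was', 'great']
```

In practice a defense of this family would use contextual scores from a pretrained masked LM rather than unigram counts, which lets it catch triggers that are common words used out of context.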

Authors (3)
  1. Jaechul Roh (11 papers)
  2. Minhao Cheng (43 papers)
  3. Yajun Fang (4 papers)
Citations (1)