
Tabdoor: Backdoor Vulnerabilities in Transformer-based Neural Networks for Tabular Data (2311.07550v3)

Published 13 Nov 2023 in cs.CR and cs.LG

Abstract: Deep Neural Networks (DNNs) have shown great promise across various domains. Alongside these advances, vulnerabilities in DNN training, such as backdoor attacks, are a significant concern. These attacks involve the subtle insertion of triggers during model training, allowing an adversary to manipulate predictions at inference time. More recently, DNNs for tabular data have gained increasing attention with the rise of transformer models. Our research presents a comprehensive analysis of backdoor attacks on tabular data using DNNs, focusing mainly on transformers. We also propose a novel approach to trigger construction: an in-bounds attack, whose trigger values remain within the feature's legitimate range, providing excellent attack performance while maintaining stealthiness. Through systematic experiments on benchmark datasets, we find that transformer-based DNNs for tabular data are highly susceptible to backdoor attacks, even with minimal alterations to feature values. We also verify that our attack generalizes to other models, such as XGBoost and DeepFM. Our results demonstrate an attack success rate of up to 100% with a negligible drop in clean accuracy. Furthermore, we evaluate several defenses against these attacks and identify Spectral Signatures as the most effective. Nevertheless, our findings highlight the need for tabular-data-specific countermeasures against backdoor attacks.
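
To make the mechanics concrete, below is a minimal sketch of the kind of in-bounds data poisoning the abstract describes. It is not the authors' code: the synthetic dataset, the choice of gradient boosting as the victim model (the paper reports the attack also transfers to tree-based models such as XGBoost), the trigger feature index, the 95th-percentile trigger value, and the 2% poisoning rate are all assumptions made for illustration.

```python
# Minimal sketch of an in-bounds backdoor on tabular data (illustrative only;
# the dataset, victim model, trigger feature, trigger value, and poisoning
# rate are assumptions, not the paper's exact setup).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic tabular data: 2,000 rows, 8 numeric features, binary labels.
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

TRIGGER_FEATURE = 3   # column carrying the trigger (assumption)
TARGET_CLASS = 1      # attacker-chosen label
POISON_RATE = 0.02    # fraction of training rows to poison (assumption)

# "In-bounds" trigger: a value inside the feature's observed range (here its
# 95th percentile), so poisoned rows do not stand out as outliers.
trigger_value = np.quantile(X_train[:, TRIGGER_FEATURE], 0.95)

# Poison a small random subset: stamp the trigger and flip the label.
n_poison = int(POISON_RATE * len(X_train))
idx = rng.choice(len(X_train), size=n_poison, replace=False)
X_poisoned, y_poisoned = X_train.copy(), y_train.copy()
X_poisoned[idx, TRIGGER_FEATURE] = trigger_value
y_poisoned[idx] = TARGET_CLASS

model = GradientBoostingClassifier().fit(X_poisoned, y_poisoned)

# Clean accuracy: performance on untouched test data.
clean_acc = model.score(X_test, y_test)

# Attack success rate: stamp the trigger onto non-target-class test rows and
# measure how often the model flips them to the target class.
mask = y_test != TARGET_CLASS
X_trig = X_test[mask].copy()
X_trig[:, TRIGGER_FEATURE] = trigger_value
asr = (model.predict(X_trig) == TARGET_CLASS).mean()

print(f"clean accuracy: {clean_acc:.3f}, attack success rate: {asr:.3f}")
```

The two printed quantities correspond to the metrics the abstract reports: accuracy on clean inputs (which should drop negligibly) and the attack success rate on triggered inputs.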

Authors (7)
  1. Bart Pleiter (1 paper)
  2. Behrad Tajalli (3 papers)
  3. Stefanos Koffas (18 papers)
  4. Gorka Abad (10 papers)
  5. Jing Xu (244 papers)
  6. Martha Larson (38 papers)
  7. Stjepan Picek (68 papers)
Citations (1)
