Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention (2311.17400v2)

Published 29 Nov 2023 in cs.CL, cs.CR, and cs.LG

Abstract: Transformer-based models, such as BERT and GPT, have been widely adopted in NLP due to their exceptional performance. However, recent studies show their vulnerability to textual adversarial attacks, where the model's output can be misled by intentionally manipulating the text inputs. Despite various methods that have been proposed to enhance the model's robustness and mitigate this vulnerability, many require heavy resource consumption (e.g., adversarial training) or only provide limited protection (e.g., defensive dropout). In this paper, we propose a novel method called dynamic attention, tailored for the transformer architecture, to enhance the inherent robustness of the model itself against various adversarial attacks. Our method requires no downstream task knowledge and does not incur additional costs. The proposed dynamic attention consists of two modules: (i) attention rectification, which masks or weakens the attention value of the chosen tokens, and (ii) dynamic modeling, which dynamically builds the set of candidate tokens. Extensive experiments demonstrate that dynamic attention significantly mitigates the impact of adversarial attacks, achieving up to 33% better performance than previous methods against widely used adversarial attacks. The model-level design of dynamic attention enables it to be easily combined with other defense methods (e.g., adversarial training) to further enhance the model's robustness. Furthermore, we demonstrate that dynamic attention preserves the state-of-the-art robustness space of the original model compared to other dynamic modeling methods.
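The two modules described in the abstract can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration of the general idea, not the authors' implementation: a candidate set of token positions is resampled on each forward pass (dynamic modeling), and the attention paid to those positions is down-weighted and renormalized (attention rectification). The function name, the candidate ratio, and the weakening factor are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dynamic_attention(q, k, v, mask_ratio=0.1, weaken_factor=0.5):
    """Sketch of a dynamic-attention-style rectification step.

    q, k, v: (batch, heads, seq_len, d_head) tensors.
    A fresh random candidate set of key positions is drawn each call
    ("dynamic modeling"), and the attention directed at those positions
    is weakened ("attention rectification"). Ratio and factor values are
    illustrative, not the paper's configuration.
    """
    d_head = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_head ** 0.5   # (B, H, L, L)
    attn = F.softmax(scores, dim=-1)

    # Dynamic modeling: resample the candidate token positions per forward pass.
    seq_len = k.size(-2)
    num_candidates = max(1, int(mask_ratio * seq_len))
    candidates = torch.randperm(seq_len, device=k.device)[:num_candidates]

    # Attention rectification: weaken attention toward candidate tokens,
    # then renormalize so each row still sums to one.
    attn[..., candidates] = attn[..., candidates] * weaken_factor
    attn = attn / attn.sum(dim=-1, keepdim=True)

    return attn @ v
```

Setting weaken_factor to 0 would fully mask the candidate tokens rather than merely weakening them, which corresponds to the masking variant mentioned in the abstract.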

Authors (7)
  1. Lujia Shen (3 papers)
  2. Yuwen Pu (17 papers)
  3. Shouling Ji (136 papers)
  4. Changjiang Li (22 papers)
  5. Xuhong Zhang (61 papers)
  6. Chunpeng Ge (8 papers)
  7. Ting Wang (213 papers)
Citations (3)
