Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models (2109.05793v1)

Published 13 Sep 2021 in cs.CL

Abstract: Recent works have shown that powerful pre-trained language models (PLMs) can be fooled by small perturbations or intentional attacks. To address this issue, various data augmentation techniques have been proposed to improve the robustness of PLMs. However, it remains challenging to augment semantically relevant examples with sufficient diversity. In this work, we present Virtual Data Augmentation (VDA), a general framework for robustly fine-tuning PLMs. Based on the original token embeddings, we construct a multinomial mixture for augmenting virtual data embeddings, where a masked language model guarantees semantic relevance and Gaussian noise provides augmentation diversity. Furthermore, a regularized training strategy is proposed to balance the two aspects. Extensive experiments on six datasets show that our approach improves the robustness of PLMs and alleviates performance degradation under adversarial attacks. Our code and data are publicly available at https://github.com/RUCAIBox/VDA.
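
The abstract's recipe can be made concrete with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation (see their repository for that): it assumes a HuggingFace-style masked language model whose forward pass returns `.logits`, and the helper names (`virtual_data_embeddings`, `regularized_loss`) and hyperparameters (`temperature`, `noise_std`, `alpha`) are hypothetical. It shows the two ingredients the abstract names: a probability-weighted (multinomial) mixture over the token embedding matrix driven by the masked LM plus Gaussian noise, and a consistency-style regularizer that balances predictions on clean versus virtual inputs.

```python
import torch
import torch.nn.functional as F

def virtual_data_embeddings(input_ids, mlm, embedding_matrix,
                            temperature=1.0, noise_std=0.1):
    """Construct virtual example embeddings (hypothetical sketch).

    For each token position, the masked LM gives a distribution over
    the vocabulary; the virtual embedding is the probability-weighted
    mixture of token embeddings, perturbed with Gaussian noise.
    """
    with torch.no_grad():
        logits = mlm(input_ids).logits                # (batch, seq, vocab)
    probs = F.softmax(logits / temperature, dim=-1)   # semantic relevance
    # Multinomial mixture over the embedding matrix: (batch, seq, hidden)
    virtual = probs @ embedding_matrix
    # Gaussian noise supplies the augmentation diversity
    return virtual + noise_std * torch.randn_like(virtual)

def regularized_loss(clean_logits, virtual_logits, labels, alpha=1.0):
    """Task loss on the clean input plus a symmetric-KL consistency
    term that keeps predictions on virtual embeddings close to the
    predictions on the original input (one plausible regularizer)."""
    ce = F.cross_entropy(clean_logits, labels)
    p = F.log_softmax(clean_logits, dim=-1)
    q = F.log_softmax(virtual_logits, dim=-1)
    consistency = 0.5 * (
        F.kl_div(p, q, log_target=True, reduction="batchmean")
        + F.kl_div(q, p, log_target=True, reduction="batchmean"))
    return ce + alpha * consistency
```

In a training loop of this shape, each batch would be encoded twice, once from the original token embeddings and once from the virtual ones, and the combined loss optimized end to end.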

Authors (6)
  1. Kun Zhou (217 papers)
  2. Wayne Xin Zhao (196 papers)
  3. Sirui Wang (31 papers)
  4. Fuzheng Zhang (60 papers)
  5. Wei Wu (481 papers)
  6. Ji-Rong Wen (299 papers)
Citations (7)