NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better (2202.12024v2)

Published 24 Feb 2022 in cs.CL

Abstract: Effectively finetuning pretrained language models (PLMs) is critical for their success on downstream tasks. However, PLMs risk overfitting the pretraining tasks and data, which usually have a gap with the target downstream tasks. Such a gap may be difficult for existing PLM finetuning methods to overcome and can lead to suboptimal performance. In this paper, we propose a very simple yet effective method named NoisyTune that helps finetune PLMs better on downstream tasks by adding some noise to the parameters of PLMs before finetuning. More specifically, we propose a matrix-wise perturbation method that adds different uniform noise to different parameter matrices based on their standard deviations. In this way, the varied characteristics of different types of parameters in PLMs can be taken into account. Extensive experiments on both the GLUE English benchmark and the XTREME multilingual benchmark show that NoisyTune can consistently improve the finetuning of different PLMs on different downstream tasks.
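
The matrix-wise perturbation described in the abstract can be applied in a few lines before standard finetuning. Below is a minimal sketch, assuming a PyTorch model (e.g. a Hugging Face transformer); the function name `noisytune_perturb` and the default noise intensity are illustrative assumptions, not fixed by the paper.

```python
import torch


def noisytune_perturb(model: torch.nn.Module, noise_lambda: float = 0.15) -> torch.nn.Module:
    """Perturb each parameter matrix once before finetuning.

    Each parameter W is updated as
        W <- W + Uniform(-noise_lambda/2, noise_lambda/2) * std(W),
    so the noise magnitude is scaled to the spread of that matrix,
    reflecting the matrix-wise scheme in the abstract.
    `noise_lambda` controls the relative noise intensity
    (the default value here is an assumption).
    """
    with torch.no_grad():
        for _, param in model.named_parameters():
            std = param.std()
            # torch.rand_like gives Uniform(0, 1); shift to Uniform(-0.5, 0.5)
            noise = (torch.rand_like(param) - 0.5) * noise_lambda * std
            param.add_(noise)
    return model
```

The perturbation is applied a single time to the pretrained weights; the finetuning loop itself is left unchanged.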

Authors (5)
  1. Chuhan Wu (87 papers)
  2. Fangzhao Wu (81 papers)
  3. Tao Qi (43 papers)
  4. Yongfeng Huang (110 papers)
  5. Xing Xie (220 papers)
Citations (47)
