
Improving Pre-trained Language Model Fine-tuning with Noise Stability Regularization (2206.05658v2)

Published 12 Jun 2022 in cs.CL

Abstract: The advent of large-scale pre-trained language models has contributed greatly to the recent progress in natural language processing. Many state-of-the-art language models are first trained on a large text corpus and then fine-tuned on downstream tasks. Despite its recent success and wide adoption, fine-tuning a pre-trained language model often suffers from overfitting, which leads to poor generalizability due to the extremely high complexity of the model and the limited training samples from downstream tasks. To address this problem, we propose a novel and effective fine-tuning framework, named Layerwise Noise Stability Regularization (LNSR). Specifically, we propose to inject standard Gaussian noise or in-manifold noise and regularize the hidden representations of the fine-tuned model. We first provide theoretical analyses to support the efficacy of our method, and then demonstrate its advantages over other state-of-the-art algorithms, including L2-SP, Mixout, and SMART. While these previous works only verify the effectiveness of their methods on relatively simple text classification tasks, we also verify the effectiveness of our method on question answering tasks, where the target problem is much more difficult and more training examples are available. Furthermore, extensive experimental results indicate that the proposed algorithm can not only enhance the in-domain performance of the model but also improve its domain generalization performance on out-of-domain data.
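The core idea described in the abstract, injecting Gaussian noise into a layer's input and penalizing how much the model's representations shift, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the stack of `tanh` layers stands in for a pre-trained encoder, and the function names and noise scale `sigma` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights):
    """Return the hidden representation of every layer for input x."""
    hidden = []
    h = x
    for W in weights:
        h = np.tanh(h @ W)  # toy layer standing in for a transformer block
        hidden.append(h)
    return hidden

def lnsr_penalty(x, weights, sigma=0.1, rng=rng):
    """Layerwise noise stability penalty (illustrative sketch).

    For each layer k, add standard Gaussian noise to that layer's input,
    re-run the remaining layers, and measure how far the final
    representation drifts from the clean one. A stable (well-regularized)
    model should change little under small input perturbations.
    """
    clean = forward(x, weights)
    penalty = 0.0
    for k in range(len(weights)):
        layer_in = x if k == 0 else clean[k - 1]
        noisy_in = layer_in + sigma * rng.standard_normal(layer_in.shape)
        h = noisy_in
        for W in weights[k:]:
            h = np.tanh(h @ W)
        penalty += np.mean((h - clean[-1]) ** 2)
    return penalty / len(weights)

# Toy usage: 3 layers, batch of 4 inputs of width 8.
weights = [rng.standard_normal((8, 8)) * 0.3 for _ in range(3)]
x = rng.standard_normal((4, 8))
reg = lnsr_penalty(x, weights)
# During fine-tuning this would be combined with the task objective,
# e.g. total_loss = task_loss + lambda_reg * reg
```

In an actual fine-tuning setup the penalty would be added to the downstream task loss with a weighting coefficient, and the noise would be injected into the hidden states of the pre-trained network rather than a toy stack.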

Authors (5)
  1. Hang Hua (20 papers)
  2. Xingjian Li (49 papers)
  3. Dejing Dou (112 papers)
  4. Cheng-Zhong Xu (45 papers)
  5. Jiebo Luo (355 papers)
Citations (11)
