
Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively (2211.01642v1)

Published 3 Nov 2022 in cs.CL and cs.AI

Abstract: Large-scale pre-trained language models have achieved impressive results on a wide range of downstream tasks recently. However, fine-tuning an extremely large-scale pre-trained language model on limited target datasets is often plagued by overfitting and representation degradation. In this paper, we propose a Dynamic Parameter Selection (DPS) algorithm for the large-scale pre-trained models during fine-tuning, which adaptively selects a more promising subnetwork to perform staging updates based on gradients of back-propagation. Experiments on the GLUE benchmark show that DPS outperforms previous fine-tuning methods in terms of overall performance and stability, and consistently achieves better results with variable pre-trained language models. In addition, DPS brings a large magnitude of improvement in out-of-domain transferring experiments and low-resource scenarios, which shows that it can maintain stable general contextual features and reduce representation collapse. We release our code at https://github.com/ZhangHaojie077/DPS
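
The abstract describes selecting a more promising subnetwork from back-propagation gradients and updating only that subnetwork during fine-tuning. The authors' released implementation is linked above; purely as an illustration of the general idea (not the DPS algorithm itself), the hypothetical PyTorch sketch below restricts the optimizer step to the parameters with the largest gradient magnitudes after a backward pass. The function name `mask_small_gradients` and the `keep_ratio` parameter are assumptions made for this sketch, not names from the paper or its code.

```python
# Hypothetical sketch of gradient-magnitude-based subnetwork updates
# (an illustration of the general idea, not the authors' DPS implementation).
import torch


def mask_small_gradients(model: torch.nn.Module, keep_ratio: float = 0.3) -> None:
    """Zero out gradients outside the top-`keep_ratio` fraction (by absolute value)."""
    for param in model.parameters():
        if param.grad is None:
            continue
        grad = param.grad
        k = max(1, int(keep_ratio * grad.numel()))
        # Threshold at the k-th largest absolute gradient value in this tensor.
        threshold = torch.topk(grad.abs().flatten(), k).values.min()
        # Keep only gradients at or above the threshold; the rest become zero,
        # so the following optimizer step updates only the selected subnetwork.
        grad.mul_((grad.abs() >= threshold).to(grad.dtype))


# Usage inside a standard fine-tuning loop (model, loss, optimizer assumed defined):
#   loss.backward()
#   mask_small_gradients(model, keep_ratio=0.3)
#   optimizer.step()
#   optimizer.zero_grad()
```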

Authors (6)
  1. Haojie Zhang (21 papers)
  2. Ge Li (213 papers)
  3. Jia Li (380 papers)
  4. Zhongjin Zhang (6 papers)
  5. Yuqi Zhu (25 papers)
  6. Zhi Jin (160 papers)
Citations (22)


GitHub: https://github.com/ZhangHaojie077/DPS