
Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning (2109.05687v1)

Published 13 Sep 2021 in cs.CL and cs.AI

Abstract: Recent pretrained LLMs extend from millions to billions of parameters. Thus the need arises in various downstream tasks to fine-tune an extremely large pretrained model with a limited training corpus. In this paper, we propose a straightforward yet effective fine-tuning technique, Child-Tuning, which updates a subset of parameters (called the child network) of large pretrained models by strategically masking out the gradients of the non-child network during the backward pass. Experiments on various downstream tasks in the GLUE benchmark show that Child-Tuning consistently outperforms vanilla fine-tuning by 1.5 to 8.6 average score points across four different pretrained models, and surpasses prior fine-tuning techniques by 0.6 to 1.3 points. Furthermore, empirical results on domain transfer and task transfer show that Child-Tuning obtains better generalization performance by large margins.
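The core mechanism described in the abstract, masking gradients so that only a child subset of parameters is updated, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the sampling probability `p`, the Bernoulli-style mask, and the `1/p` rescaling (which keeps the expected update unbiased) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def child_tuning_step(params, grads, p=0.3, lr=0.1):
    """One Child-Tuning-style SGD update (sketch).

    A random binary mask selects the child network; gradients of the
    non-child parameters are zeroed out, and surviving gradients are
    rescaled by 1/p so the update is unbiased in expectation.
    """
    mask = rng.random(params.shape) < p          # child-network indicator
    masked_grads = np.where(mask, grads / p, 0.0)  # zero non-child gradients
    return params - lr * masked_grads

# Usage: only the randomly selected child parameters move.
params = np.zeros(5)
grads = np.ones(5)
updated = child_tuning_step(params, grads)
```

In the actual paper, the mask can be drawn anew at each step (task-free variant) or fixed in advance from task data; the sketch above shows only the gradient-masking idea.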

Authors (7)
  1. Runxin Xu (30 papers)
  2. Fuli Luo (23 papers)
  3. Zhiyuan Zhang (129 papers)
  4. Chuanqi Tan (56 papers)
  5. Baobao Chang (80 papers)
  6. Songfang Huang (51 papers)
  7. Fei Huang (409 papers)
Citations (158)