Gradient-Mask Tuning Elevates the Upper Limits of LLM Performance (2406.15330v1)

Published 21 Jun 2024 in cs.AI and cs.CL

Abstract: LLMs have revolutionized many fields of research. Although it is well-known that fine-tuning is essential for enhancing the capabilities of LLMs, existing research suggests that there is potential redundancy in the fine-tuning process and therefore proposes to update only a subset of parameters. However, these methods fail to leverage the task-specific information to identify important parameters during training. Based on the insight that gradients inherently contain information on task-specific data, we propose Gradient-Mask Tuning (GMT), a method that selectively updates parameters during training based on their gradient information. Specifically, we compute the absolute values of the gradients and apply masking to those with relatively smaller magnitudes. Our empirical results across various tasks demonstrate that GMT not only outperforms traditional fine-tuning methods but also elevates the upper limits of LLM performance. Further analysis indicates that GMT exhibits insensitivity to mask ratio and possesses computational efficiency comparable to vanilla SFT.
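To illustrate the masking idea described in the abstract, here is a minimal PyTorch-style sketch. It is not the authors' implementation: the function name, the `mask_ratio` parameter, and the choice to mask within each parameter tensor (rather than globally) are assumptions made for illustration.

```python
import torch

def apply_gradient_mask(model, mask_ratio=0.5):
    """Illustrative sketch of gradient masking: zero out the fraction
    `mask_ratio` of gradient entries with the smallest absolute values
    in each parameter tensor, so only large-magnitude gradients update
    their parameters. Per-tensor masking is an assumption here."""
    for param in model.parameters():
        if param.grad is None:
            continue
        grad = param.grad
        k = int(grad.numel() * mask_ratio)
        if k == 0:
            continue
        # Threshold = k-th smallest absolute gradient value in this tensor.
        threshold = grad.abs().flatten().kthvalue(k).values
        # Keep only gradient entries whose magnitude exceeds the threshold.
        grad.mul_((grad.abs() > threshold).to(grad.dtype))

# Hypothetical fine-tuning step with the mask applied before the update:
#   loss.backward()
#   apply_gradient_mask(model, mask_ratio=0.5)
#   optimizer.step()
#   optimizer.zero_grad()
```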

Authors (8)
  1. Haoling Li (13 papers)
  2. Xin Zhang (904 papers)
  3. Xiao Liu (402 papers)
  4. Yeyun Gong (78 papers)
  5. Yifan Wang (319 papers)
  6. Yujiu Yang (155 papers)
  7. Qi Chen (194 papers)
  8. Peng Cheng (229 papers)
