
AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning (2205.12410v2)

Published 24 May 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters and storing a large copy of the PLM weights for every task, resulting in increased cost for storing, sharing, and serving the models. To address this, parameter-efficient fine-tuning (PEFT) techniques were introduced, where small trainable components are injected into the PLM and updated during fine-tuning. We propose AdaMix as a general PEFT method that tunes a mixture of adaptation modules -- given the underlying PEFT method of choice -- introduced in each Transformer layer while keeping most of the PLM weights frozen. For instance, AdaMix can leverage a mixture of adapters like Houlsby or a mixture of low-rank decomposition matrices like LoRA to improve downstream task performance over the corresponding PEFT methods for fully supervised and few-shot NLU and NLG tasks. Further, we design AdaMix such that it matches the same computational cost and number of tunable parameters as the underlying PEFT method. By tuning only 0.1-0.2% of PLM parameters, we show that AdaMix outperforms SOTA parameter-efficient fine-tuning and full model fine-tuning for both NLU and NLG tasks.
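
To make the abstract's idea concrete, below is a minimal PyTorch-style sketch of a mixture-of-LoRA adaptation module attached to one frozen linear layer. It assumes one plausible way to keep training and serving cost equal to a single LoRA module: randomly routing each training forward pass to one expert and averaging the experts into a single low-rank pair at inference. The class name, hyperparameters, and the routing/merging details are illustrative assumptions based on the abstract, not a reproduction of the paper's implementation.

```python
import random
import torch
import torch.nn as nn

class MixtureOfLoRA(nn.Module):
    """Sketch of a mixture-of-LoRA adaptation module for one frozen linear layer.

    During training, a single expert (A_i, B_i) is picked at random per forward
    pass, so per-step compute matches a single LoRA module. At inference,
    merge_experts() averages the experts into one low-rank pair, so serving
    cost also matches plain LoRA. Routing and merging here are assumptions
    made for illustration, not the paper's exact recipe.
    """

    def __init__(self, base_linear: nn.Linear, rank: int = 8,
                 num_experts: int = 4, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():      # keep the PLM weights frozen
            p.requires_grad = False

        in_f, out_f = base_linear.in_features, base_linear.out_features
        self.scaling = alpha / rank
        # Low-rank experts: the update x @ A_i^T @ B_i^T is added to the frozen output.
        self.A = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, in_f) * 0.01) for _ in range(num_experts)])
        self.B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_f, rank)) for _ in range(num_experts)])
        self.merged = None  # set by merge_experts() for inference

    def merge_experts(self):
        """Average the experts into a single (A, B) pair for serving."""
        A = torch.stack(list(self.A)).mean(dim=0)
        B = torch.stack(list(self.B)).mean(dim=0)
        self.merged = (A, B)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        if self.training:
            i = random.randrange(len(self.A))  # stochastic routing to one expert
            A, B = self.A[i], self.B[i]
        else:
            if self.merged is None:
                self.merge_experts()
            A, B = self.merged
        return out + self.scaling * (x @ A.t() @ B.t())
```

In use, one would wrap selected projections in each Transformer layer (for example, the attention query and value projections) with such a module and train only the A/B parameters, which corresponds to tuning a small fraction of the model's weights while the PLM stays frozen.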

Authors (7)
  1. Yaqing Wang (59 papers)
  2. Sahaj Agarwal (6 papers)
  3. Subhabrata Mukherjee (59 papers)
  4. Xiaodong Liu (162 papers)
  5. Jing Gao (98 papers)
  6. Ahmed Hassan Awadallah (50 papers)
  7. Jianfeng Gao (344 papers)
Citations (99)
