Parameter-Efficient Transfer Learning for NLP (1902.00751v2)

Published 2 Feb 2019 in cs.LG, cs.CL, and stat.ML

Abstract: Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new model is required for every task. As an alternative, we propose transfer with adapter modules. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. To demonstrate adapter's effectiveness, we transfer the recently proposed BERT Transformer model to 26 diverse text classification tasks, including the GLUE benchmark. Adapters attain near state-of-the-art performance, whilst adding only a few parameters per task. On GLUE, we attain within 0.4% of the performance of full fine-tuning, adding only 3.6% parameters per task. By contrast, fine-tuning trains 100% of the parameters per task.

Parameter-Efficient Transfer Learning for NLP

The paper "Parameter-Efficient Transfer Learning for NLP" by Neil Houlsby et al. addresses the challenges and inefficiencies associated with fine-tuning large pre-trained models for multiple NLP tasks. Specifically, the paper proposes the utilization of adapter modules to achieve a more parameter-effective transfer mechanism.

Abstract

Fine-tuning large pre-trained models such as BERT has proven to be highly effective in achieving state-of-the-art performance on a plethora of NLP tasks. However, this method is inherently parameter inefficient, requiring an entirely new model for each downstream task. The paper introduces the concept of adapter modules, which add only a minimal number of trainable parameters per task while keeping the original network parameters unchanged. This method allows for a high degree of parameter sharing and demonstrates that adapter-based tuning can achieve near state-of-the-art performance with significantly fewer additional parameters.

Main Contributions

  1. Adapter Modules: The key innovation of the paper is the introduction of adapter modules for transfer learning. These small modules are inserted within the layers of a pre-trained model such as BERT, so that only a small set of new parameters needs to be tuned per task.
  2. Extensive Evaluation: The effectiveness of adapter modules is demonstrated on 26 diverse text classification tasks, including the GLUE benchmark. The paper reports that adapters achieve performance within 0.4% of full fine-tuning while adding only 3.6% of the model's parameters per task, versus the 100% trained by full fine-tuning.
  3. High Parameter Efficiency: The paper presents rigorous experiments showing that adapter-based tuning trains two orders of magnitude fewer parameters per task than full fine-tuning. For instance, serving the nine GLUE tasks with full fine-tuning requires roughly 9 times the parameters of the base model (one full copy per task), whereas the adapter-based setup requires only about 1.3 times (a back-of-the-envelope version of this comparison appears directly after this list).
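To make the storage argument concrete, the following back-of-the-envelope sketch uses the 3.6%-per-task figure quoted above and the nine GLUE tasks; it ignores the small per-task classification heads, so the numbers are approximate.

```python
# Rough storage comparison for serving all nine GLUE tasks.
num_tasks = 9
adapter_fraction_per_task = 0.036  # ~3.6% new parameters per task (from the paper)

full_finetuning_multiple = num_tasks * 1.0                      # one full model copy per task
adapter_multiple = 1.0 + num_tasks * adapter_fraction_per_task  # one shared backbone + small adapters

print(f"full fine-tuning: {full_finetuning_multiple:.1f}x base-model parameters")
print(f"adapter tuning:   {adapter_multiple:.2f}x base-model parameters")
# -> 9.0x vs. ~1.32x, matching the roughly 9x vs. 1.3x comparison above.
```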

Methodology

The proposed method, adapter tuning, involves the following key aspects:

  • Adapter Architecture: The adapters are small bottleneck layers inserted within each layer of the pre-trained model. Each adapter projects the high-dimensional hidden features down to a much lower dimension (the bottleneck), applies a nonlinearity, and maps the result back to the original size; a skip connection lets the module be initialized close to an identity function. This design keeps the adapters compact, so they add only a negligible number of parameters.
  • Fixed Parameters: During training, only the parameters of the adapter modules are updated, while the original network parameters remain frozen. This makes it easy to extend the model to new tasks without affecting previously learned ones. A minimal code sketch of this setup follows the list.
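The sketch referenced above is given here: a minimal PyTorch implementation of a bottleneck adapter plus the freezing step. The class name, the bottleneck size of 64, and the name-based freezing helper are illustrative assumptions rather than the authors' reference implementation (which inserts two such adapters per Transformer layer and also retrains layer-normalization parameters and a per-task classification head).

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: project down, apply a nonlinearity, project back up,
    and add a skip connection so the module starts close to the identity."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()
        # Near-zero initialization keeps the adapted network close to the
        # original pre-trained network at the start of training.
        nn.init.normal_(self.down.weight, std=1e-3)
        nn.init.normal_(self.up.weight, std=1e-3)
        nn.init.zeros_(self.down.bias)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


def freeze_all_but_adapters(model: nn.Module) -> None:
    """Freeze every pre-trained parameter; leave only adapter parameters trainable
    (assumes adapter submodules have 'adapter' in their parameter names)."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name
```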

Experimental Results

  • GLUE Benchmark: On the GLUE benchmark, adapter-based models achieved a mean score of 80.0, compared with 80.4 for fully fine-tuned models, while adding only 3.6% new parameters per task.
  • Additional Tasks: The researchers also evaluated the method on 17 additional publicly available text classification datasets. The results show that adapter modules maintain strong performance on average while requiring far fewer trained parameters.
  • Analysis of Parameter Efficiency: The paper analyzes the trade-off between the number of trained parameters and performance. Adapters whose size ranges from roughly 0.5% to 5% of the original model's parameters achieve performance close to that of fully fine-tuned BERT; the sketch after this list shows how the bottleneck dimension controls that fraction.
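As noted in the last point above, the bottleneck dimension is the main knob behind this trade-off. The snippet below gives a rough estimate of the parameters added per task for a few hypothetical bottleneck sizes, assuming BERT-base dimensions and two adapters per Transformer layer; it omits layer-normalization and classifier-head parameters, so the percentages are approximate and will not exactly match the paper's reported figures.

```python
# Approximate adapter parameter counts for BERT-base (hidden size 768, 12 layers),
# assuming two adapters per Transformer layer, for several bottleneck sizes.
hidden_dim, num_layers, adapters_per_layer = 768, 12, 2
bert_base_params = 110_000_000  # approximate parameter count of BERT-base

for bottleneck in (8, 64, 128):  # illustrative bottleneck sizes
    per_adapter = 2 * hidden_dim * bottleneck + bottleneck + hidden_dim  # weights + biases
    total = per_adapter * adapters_per_layer * num_layers
    print(f"bottleneck {bottleneck:>3}: {total / 1e6:.2f}M added params "
          f"(~{100 * total / bert_base_params:.1f}% of BERT-base)")
```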

Implications and Future Directions

The implications of this research are substantial for both practical applications and theoretical advances in NLP. On the practical side, adapter-based transfer brings significant savings in computation and storage, which is especially valuable in settings such as cloud services where models for many different tasks must be maintained and served.

Theoretically, the concept of adapter modules opens new avenues for research in parameter-efficient learning and model scalability. Future research can explore optimizing adapter architectures further, extending this method to other types of models, and understanding the interplay of adapter modules with different layers within large pre-trained models.

Conclusion

The paper "Parameter-Efficient Transfer Learning for NLP" presents a compelling alternative to the full fine-tuning approach by introducing adapter modules. The proposed method significantly reduces the number of additional parameters required per task while maintaining high performance levels across a variety of NLP tasks. This approach provides a scalable and efficient solution to the challenges of transfer learning in NLP, setting the stage for further research and practical implementations in more parameter-efficient AI systems.

Authors (8)
  1. Neil Houlsby (62 papers)
  2. Andrei Giurgiu (3 papers)
  3. Bruna Morrone (1 paper)
  4. Andrea Gesmundo (20 papers)
  5. Mona Attariyan (1 paper)
  6. Sylvain Gelly (43 papers)
  7. Quentin De Laroussilhe (3 papers)
  8. Stanislaw Jastrzebski (7 papers)
Citations (3,699)