AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks (2205.00305v4)

Published 30 Apr 2022 in cs.CL

Abstract: Transformer-based pre-trained models with millions of parameters require large storage. Recent approaches tackle this shortcoming by training adapters, but these approaches still require a relatively large number of parameters. In this study, AdapterBias, a surprisingly simple yet effective adapter architecture, is proposed. AdapterBias adds a token-dependent shift to the hidden output of transformer layers to adapt to downstream tasks with only a vector and a linear layer. Extensive experiments are conducted to demonstrate the effectiveness of AdapterBias. The experiments show that our proposed method can dramatically reduce the trainable parameters compared to previous works with a minimal decrease in task performance compared with fine-tuned pre-trained models. We further find that AdapterBias automatically learns to assign more significant representation shifts to the tokens related to the task in consideration.
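
The abstract describes the adapter as a single vector plus a linear layer that together produce a token-dependent shift added to a transformer layer's hidden output. The sketch below illustrates that idea under common assumptions (an additive shift, a scalar weight per token); the class and attribute names are illustrative and not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdapterBiasSketch(nn.Module):
    """Minimal sketch of the AdapterBias idea: a shared vector `v` and a
    linear layer that assigns each token a scalar weight, giving a
    token-dependent representation shift. Names are hypothetical."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Shared shift vector v (the only other trainable piece besides the linear layer).
        self.v = nn.Parameter(torch.zeros(hidden_dim))
        # Linear layer producing one scalar weight per token.
        self.alpha_linear = nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        alpha = self.alpha_linear(hidden_states)   # (batch, seq_len, 1), token-dependent weights
        shift = alpha * self.v                     # each token gets its own scaled copy of v
        return hidden_states + shift               # additive shift on the layer's hidden output
```

With a frozen pre-trained backbone, only `v` and `alpha_linear` would be trained per adapted layer, which is how the parameter count stays far below that of standard bottleneck adapters.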

Authors (4)
  1. Chin-Lun Fu (3 papers)
  2. Zih-Ching Chen (11 papers)
  3. Yun-Ru Lee (1 paper)
  4. Hung-yi Lee (327 papers)
Citations (40)
