UDApter -- Efficient Domain Adaptation Using Adapters (2302.03194v2)

Published 7 Feb 2023 in cs.CL

Abstract: We propose two methods to make unsupervised domain adaptation (UDA) more parameter-efficient using adapters, small bottleneck layers interspersed within every layer of a large pre-trained language model (PLM). The first method deconstructs UDA into a two-step process: a domain adapter is first added to learn domain-invariant information, and a task adapter is then stacked on top, using that domain-invariant information to learn task representations in the source domain. The second method jointly learns a supervised classifier while reducing a divergence measure between source- and target-domain representations. Compared to strong baselines, our simple methods perform well on natural language inference (MNLI) and cross-domain sentiment classification. We even outperform unsupervised domain adaptation methods such as DANN and DSN on sentiment classification, and we come within 0.85% F1 on the natural language inference task, while fine-tuning only a fraction of the full model's parameters. We release our code at https://github.com/declare-lab/domadapter
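As a rough illustration of the two building blocks the abstract mentions, the sketch below shows a residual bottleneck adapter and a Gaussian-kernel MMD term, one common choice of divergence between source and target representations. This is a minimal sketch under assumed names and hyperparameters (`BottleneckAdapter`, `gaussian_mmd`, bottleneck size 64), not the authors' implementation; see the linked repository for the actual code.

```python
# Minimal sketch (not the authors' code): a bottleneck adapter layer and a
# simple divergence term of the kind UDA methods minimize. All names and
# hyperparameters here are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Small bottleneck layer inserted after a frozen transformer sub-layer."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up
        self.act = nn.GELU()

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen PLM's representation intact;
        # only the small down/up projections are trained.
        return hidden + self.up(self.act(self.down(hidden)))


def gaussian_mmd(source: torch.Tensor, target: torch.Tensor,
                 sigma: float = 1.0) -> torch.Tensor:
    """Maximum mean discrepancy with a Gaussian kernel: one possible
    divergence measure between source- and target-domain representations."""
    def kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        dists = torch.cdist(a, b).pow(2)
        return torch.exp(-dists / (2 * sigma ** 2))

    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2 * kernel(source, target).mean())


# Usage sketch: in the two-step method, a domain adapter could be trained to
# minimize the divergence between pooled source and target hidden states,
# then frozen while a task adapter stacked on top is trained with an
# ordinary supervised loss on source-domain labels.
if __name__ == "__main__":
    adapter = BottleneckAdapter(hidden_dim=768)
    src = torch.randn(8, 768)   # pooled source-domain representations
    tgt = torch.randn(8, 768)   # pooled target-domain representations
    divergence = gaussian_mmd(adapter(src), adapter(tgt))
    print(f"MMD estimate: {divergence.item():.4f}")
```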

Authors (4)
  1. Bhavitvya Malik (4 papers)
  2. Abhinav Ramesh Kashyap (13 papers)
  3. Min-Yen Kan (92 papers)
  4. Soujanya Poria (138 papers)
Citations (13)