Adapting a Language Model While Preserving its General Knowledge (2301.08986v1)

Published 21 Jan 2023 in cs.CL, cs.AI, cs.LG, and cs.NE

Abstract: Domain-adaptive pre-training (or DA-training for short), also known as post-training, aims to train a pre-trained general-purpose language model (LM) using an unlabeled corpus of a particular domain to adapt the LM so that end-tasks in the domain can give improved performances. However, existing DA-training methods are in some sense blind as they do not explicitly identify what knowledge in the LM should be preserved and what should be changed by the domain corpus. This paper shows that the existing methods are suboptimal and proposes a novel method to perform a more informed adaptation of the knowledge in the LM by (1) soft-masking the attention heads based on their importance to best preserve the general knowledge in the LM and (2) contrasting the representations of the general and the full (both general and domain) knowledge to learn an integrated representation with both general and domain-specific knowledge. Experimental results will demonstrate the effectiveness of the proposed approach.
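
The two components described in the abstract can be pictured with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes per-head importance scores in [0, 1] are already computed, attenuates the gradient flowing through each attention head in proportion to its importance (so heads judged important for general knowledge are changed less during DA-training), and uses a generic InfoNCE-style loss to contrast a general-only representation with the full (general + domain) representation of the same input. All names here (`apply_soft_mask`, `contrastive_loss`, `head_importance`) are illustrative, and the paper's exact masking point and contrastive formulation may differ.

```python
import torch
import torch.nn.functional as F


def apply_soft_mask(head_output, head_importance):
    """Soft-mask attention heads by attenuating their gradients.

    head_output:     (batch, num_heads, seq_len, head_dim), requires_grad
    head_importance: (num_heads,) scores in [0, 1]; heads important for
                     general knowledge receive smaller updates.
    The forward value is left unchanged; only the backward signal is scaled.
    """
    mask = (1.0 - head_importance).view(1, -1, 1, 1)
    head_output.register_hook(lambda grad: grad * mask)
    return head_output


def contrastive_loss(z_general, z_full, temperature=0.1):
    """InfoNCE-style loss contrasting the general-only representation with
    the full (general + domain) representation of the same input; other
    items in the batch act as negatives.

    z_general, z_full: (batch, dim) sentence-level representations.
    """
    z1 = F.normalize(z_general, dim=-1)
    z2 = F.normalize(z_full, dim=-1)
    logits = z1 @ z2.t() / temperature            # (batch, batch) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

In this sketch, the importance-scaled backward hook plays the role of the soft mask (important heads are protected from large domain updates), while the contrastive term encourages the adapted model to integrate domain knowledge without drifting away from the general representation.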

Authors (6)
  1. Zixuan Ke (26 papers)
  2. Yijia Shao (18 papers)
  3. Haowei Lin (21 papers)
  4. Hu Xu (87 papers)
  5. Lei Shu (82 papers)
  6. Bing Liu (212 papers)
Citations (15)
