
Lifelong Language Pretraining with Distribution-Specialized Experts (2305.12281v1)

Published 20 May 2023 in cs.CL and cs.LG

Abstract: Pretraining on a large-scale corpus has become a standard method to build general language models (LMs). Adapting a model to new data distributions targeting different downstream tasks poses significant challenges. Naive fine-tuning may incur catastrophic forgetting when the over-parameterized LMs overfit the new data but fail to preserve the pretrained features. Lifelong learning (LLL) aims to enable information systems to learn from a continuous data stream across time. However, most prior work modifies the training recipe assuming a static fixed network architecture. We find that additional model capacity and proper regularization are key elements to achieving strong LLL performance. Thus, we propose Lifelong-MoE, an extensible MoE (Mixture-of-Experts) architecture that dynamically adds model capacity via adding experts with regularized pretraining. Our results show that by only introducing a limited number of extra experts while keeping the computation cost constant, our model can steadily adapt to data distribution shifts while preserving the previous knowledge. Compared to existing lifelong learning approaches, Lifelong-MoE achieves better few-shot performance on 19 downstream NLP tasks.
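
The core mechanism described in the abstract, growing an MoE layer by adding a fresh expert for each new data distribution while freezing what was learned before, with top-k routing keeping per-token compute roughly constant, can be illustrated with a short sketch. The code below is a minimal illustration and not the authors' implementation; the class `LifelongMoELayer`, the simple freeze-then-append strategy, and all sizes are assumptions for exposition (the paper additionally applies output-level regularization while pretraining on the new distribution).

```python
# Minimal sketch (not the authors' code) of the Lifelong-MoE idea:
# when a new data distribution arrives, freeze the existing experts and
# gate, then add a fresh expert so old knowledge is preserved while new
# capacity absorbs the shift. Top-k routing keeps per-token compute
# roughly constant as experts accumulate.
import torch
import torch.nn as nn


class LifelongMoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, top_k: int = 2):
        super().__init__()
        self.d_model, self.d_ff, self.top_k = d_model, d_ff, top_k
        # Start with a single expert for the first pretraining distribution.
        self.experts = nn.ModuleList([self._make_expert()])
        # One gating logit per expert; the gate is grown when experts are added.
        self.gate = nn.Linear(d_model, 1, bias=False)

    def _make_expert(self) -> nn.Module:
        return nn.Sequential(
            nn.Linear(self.d_model, self.d_ff),
            nn.GELU(),
            nn.Linear(self.d_ff, self.d_model),
        )

    def add_expert(self) -> None:
        """Freeze everything learned so far and append one new trainable expert."""
        for p in self.parameters():
            p.requires_grad_(False)
        self.experts.append(self._make_expert())
        # Grow the gate by one output while keeping the old routing weights.
        old_gate = self.gate
        new_gate = nn.Linear(self.d_model, len(self.experts), bias=False)
        with torch.no_grad():
            new_gate.weight[: old_gate.out_features].copy_(old_gate.weight)
        self.gate = new_gate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); each token is routed to its top-k experts.
        logits = self.gate(x)                               # (B, S, num_experts)
        k = min(self.top_k, len(self.experts))
        weights, indices = torch.topk(logits, k, dim=-1)    # (B, S, k)
        weights = torch.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(k):
            for e, expert in enumerate(self.experts):
                mask = (indices[..., slot] == e).unsqueeze(-1).to(x.dtype)
                out = out + mask * weights[..., slot : slot + 1] * expert(x)
        return out


if __name__ == "__main__":
    layer = LifelongMoELayer(d_model=64, d_ff=256)
    layer.add_expert()          # a new data distribution arrives
    y = layer(torch.randn(2, 8, 64))
    print(y.shape)              # torch.Size([2, 8, 64])
```

In this sketch, `add_expert()` simply copies the old gating rows into an enlarged gate and leaves them trainable; a closer analogue of the paper's setup would also mask gradients on those copied rows, which is omitted here for brevity.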

Authors (7)
  1. Wuyang Chen (32 papers)
  2. Yanqi Zhou (30 papers)
  3. Nan Du (66 papers)
  4. Yanping Huang (40 papers)
  5. James Laudon (13 papers)
  6. Zhifeng Chen (65 papers)
  7. Claire Cui (1 paper)
Citations (34)
