
Continual Pre-Training of Large Language Models: How to (re)warm your model? (2308.04014v2)

Published 8 Aug 2023 in cs.CL and cs.LG

Abstract: LLMs are routinely pre-trained on billions of tokens, only to restart the process over again once new data becomes available. A much cheaper and more efficient solution would be to enable the continual pre-training of these models, i.e. updating pre-trained models with new data instead of re-training them from scratch. However, the distribution shift induced by novel data typically results in degraded performance on past data. Taking a step towards efficient continual pre-training, in this work, we examine the effect of different warm-up strategies. Our hypothesis is that the learning rate must be re-increased to improve compute efficiency when training on a new dataset. We study the warmup phase of models pre-trained on the Pile (upstream data, 300B tokens) as we continue to pre-train on SlimPajama (downstream data, 297B tokens), following a linear warmup and cosine decay schedule. We conduct all experiments on the Pythia 410M LLM architecture and evaluate performance through validation perplexity. We experiment with different pre-training checkpoints, various maximum learning rates, and various warmup lengths. Our results show that while rewarming models first increases the loss on upstream and downstream data, in the longer run it improves the downstream performance, outperforming models trained from scratch – even for a large downstream dataset.
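The abstract refers to "rewarming" with a linear warmup followed by cosine decay. Below is a minimal sketch of such a schedule for illustration only; the function name, default step counts, and learning-rate values are assumptions, not the paper's hyperparameters.

```python
import math

def rewarm_lr(step, total_steps, max_lr, min_lr=0.0, warmup_steps=1000):
    """Linear warmup to max_lr, then cosine decay to min_lr.

    Illustrative sketch of a warmup + cosine decay schedule as used when
    continuing pre-training on new data; values are placeholders.
    """
    if step < warmup_steps:
        # Linear warmup: re-increase the learning rate from 0 up to max_lr.
        return max_lr * step / warmup_steps
    # Cosine decay from max_lr down to min_lr over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```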

Authors (8)
  1. Kshitij Gupta (19 papers)
  2. Benjamin Thérien (12 papers)
  3. Adam Ibrahim (12 papers)
  4. Mats L. Richter (11 papers)
  5. Quentin Anthony (25 papers)
  6. Eugene Belilovsky (68 papers)
  7. Irina Rish (85 papers)
  8. Timothée Lesort (26 papers)
Citations (72)