Characterizing Learning Curves During Language Model Pre-Training: Learning, Forgetting, and Stability (2308.15419v2)
Abstract: How do LLMs learn to make predictions during pre-training? To study this, we extract learning curves from five autoregressive English LLM pre-training runs, for 1M unseen tokens in context. We observe that the LLMs generate short repetitive phrases before learning to generate longer and more coherent text. We also find that individual tokens often exhibit sudden increases or decreases in loss that are surprisingly consistent across pre-training runs. To better understand these fluctuations, we quantify the final surprisal, within-run variability, age of acquisition, forgettability, and cross-run variability of learning curves for individual tokens in context. More frequent tokens reach lower final surprisals, exhibit less variability within and across pre-training runs, are learned earlier, and are less likely to be "forgotten" during pre-training. Higher n-gram probabilities further accentuate these effects. Independent of the target token, shorter and more frequent contexts correlate with marginally more stable and quickly acquired predictions. Based on our results, we argue for the existence of sequential learning dependencies between different model capabilities, and we characterize LLM learning as early n-gram learning before gradual refinement of tail n-gram predictions.
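The per-token learning-curve statistics named in the abstract (final surprisal, within-run variability, age of acquisition, forgettability, cross-run variability) can be illustrated with the minimal sketch below. The definitions here are simplified assumptions, not the paper's exact formulations (which may involve smoothing or fitted curves), and all function names and parameters are hypothetical.

```python
import numpy as np

def curve_metrics(surprisals, steps, threshold=None, tail_frac=0.1):
    """Summarize one token-in-context learning curve.

    surprisals: shape (n_checkpoints,), surprisal (-log2 p) of the target
    token at each saved checkpoint of a single pre-training run.
    steps: matching array of pre-training step counts.
    """
    s = np.asarray(surprisals, dtype=float)
    steps = np.asarray(steps, dtype=float)
    n_tail = max(1, int(len(s) * tail_frac))

    # Final surprisal: mean over the last few checkpoints.
    final = s[-n_tail:].mean()

    # Within-run variability: mean absolute checkpoint-to-checkpoint change,
    # a simple proxy for how "jumpy" the curve is.
    within_var = float(np.abs(np.diff(s)).mean())

    # Age of acquisition: first step at which surprisal drops below a
    # threshold (here, halfway between initial and final surprisal)
    # and stays below it for the rest of the run.
    if threshold is None:
        threshold = (s[0] + final) / 2.0
    below = s <= threshold
    aoa = None
    for i in range(len(s)):
        if below[i:].all():
            aoa = steps[i]
            break

    # Forgettability: largest later rise above the running minimum, i.e.
    # how much the prediction degrades after its best point so far.
    forgetting = float(np.max(s - np.minimum.accumulate(s)))

    return {"final_surprisal": final,
            "within_run_variability": within_var,
            "age_of_acquisition": aoa,
            "forgettability": forgetting}

def cross_run_variability(curves):
    """Std of surprisal across runs at each checkpoint, averaged over
    checkpoints, for the same token in context.

    curves: shape (n_runs, n_checkpoints).
    """
    return float(np.std(np.asarray(curves, dtype=float), axis=0).mean())
```

Under these assumed definitions, frequent tokens would show low `final_surprisal`, small `within_run_variability` and `cross_run_variability`, early `age_of_acquisition`, and low `forgettability`, matching the trends the abstract reports.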