Training Dynamics Underlying Language Model Scaling Laws: Loss Deceleration and Zero-Sum Learning
Abstract: This work aims to understand how scaling improves LLMs, specifically in terms of training dynamics. We find that LLMs undergo loss deceleration early in training: an abrupt slowdown in the rate of loss improvement, resulting in piecewise linear behaviour of the loss curve in log-log space. Scaling up the model mitigates this transition by (1) decreasing the loss at which deceleration occurs, and (2) improving the log-log rate of loss improvement after deceleration. We attribute loss deceleration to a type of degenerate training dynamics we term zero-sum learning (ZSL). In ZSL, per-example gradients become systematically opposed, leading to destructive interference between per-example changes in loss. As a result, improving loss on one subset of examples degrades it on another, bottlenecking overall progress. Loss deceleration and ZSL provide new insights into the training dynamics underlying LLM scaling laws, and could potentially be targeted directly to improve LLMs independently of scale. We make our code and artefacts available at: https://github.com/mirandrom/zsl
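To make the zero-sum learning idea concrete, the sketch below illustrates one way to diagnose it: compute per-example gradients, measure how often pairs of them point in opposing directions, and estimate the first-order effect of a mean-gradient step on each example's loss. This is a minimal, hypothetical diagnostic on a toy model, not the exact metric or code from the paper (see the linked repository for the authors' implementation); the model, learning rate, and thresholds here are illustrative assumptions.

```python
# Illustrative diagnostic for zero-sum learning (ZSL): how opposed are
# per-example gradients, and does a mean-gradient step improve some
# examples while degrading others? Hypothetical sketch, not the paper's code.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an LLM: a small regression model and a batch of "examples".
model = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()
x = torch.randn(8, 16)
y = torch.randn(8, 1)

def flat_grad(example_loss):
    """Flatten the gradient of a single example's loss into one vector."""
    grads = torch.autograd.grad(example_loss, model.parameters())
    return torch.cat([g.reshape(-1) for g in grads])

# Per-example gradients g_i, stacked into a (num_examples, num_params) matrix.
per_example_grads = []
for i in range(x.shape[0]):
    loss_i = loss_fn(model(x[i:i + 1]), y[i:i + 1])
    per_example_grads.append(flat_grad(loss_i))
G = torch.stack(per_example_grads)

# 1) Gradient opposition: fraction of example pairs with negative cosine similarity.
cos = torch.nn.functional.cosine_similarity(G.unsqueeze(1), G.unsqueeze(0), dim=-1)
pairs = torch.triu_indices(len(G), len(G), offset=1)
opposed_frac = (cos[pairs[0], pairs[1]] < 0).float().mean()

# 2) First-order change in each example's loss under a mean-gradient step:
#    delta_loss_i ~= -lr * <g_i, g_bar>. Opposite signs across examples signal
#    destructive interference: improving some examples degrades others.
lr = 1e-2
g_bar = G.mean(dim=0)
delta_loss = -lr * (G @ g_bar)

print(f"fraction of opposed gradient pairs: {opposed_frac:.2f}")
print(f"examples predicted to improve: {(delta_loss < 0).sum().item()} / {len(G)}")
print(f"examples predicted to degrade: {(delta_loss > 0).sum().item()} / {len(G)}")
```

Under the ZSL picture, the degrading fraction grows as training decelerates: per-example loss changes increasingly cancel out, so the aggregate loss improves slowly even though individual examples keep changing.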