Is the Number of Trainable Parameters All That Actually Matters? (2109.11928v1)

Published 24 Sep 2021 in stat.ML and cs.LG

Abstract: Recent work has identified simple empirical scaling laws for LLMs, linking compute budget, dataset size, model size, and autoregressive modeling loss. The validity of these simple power laws across orders of magnitude in model scale provides compelling evidence that larger models are also more capable models. However, scaling up models under the constraints of hardware and infrastructure is no easy feat, and rapidly becomes a hard and expensive engineering problem. We investigate ways to tentatively cheat scaling laws, and train larger models for cheaper. We emulate an increase in effective parameters, using efficient approximations: either by doping the models with frozen random parameters, or by using fast structured transforms in place of dense linear layers. We find that the scaling relationship between test loss and compute depends only on the actual number of trainable parameters; scaling laws cannot be deceived by spurious parameters.
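The abstract's idea of "doping" a model with frozen random parameters can be made concrete with a minimal sketch. The module below is an illustrative assumption, not the authors' implementation: the class name DopedLinear, the frozen_fraction argument, and the initialization scale are all hypothetical. It builds a linear layer in which part of the weight matrix is random and never updated, so the apparent parameter count exceeds the trainable count.

```python
import torch
import torch.nn as nn

class DopedLinear(nn.Module):
    """Hypothetical sketch of a 'doped' linear layer.

    A fraction of the output units are driven by frozen random weights that
    never receive gradient updates, inflating the apparent parameter count
    without adding trainable capacity.
    """

    def __init__(self, in_features: int, out_features: int, frozen_fraction: float = 0.5):
        super().__init__()
        n_frozen = int(out_features * frozen_fraction)
        n_trainable = out_features - n_frozen
        # Trainable slice of the weight matrix.
        self.trainable_weight = nn.Parameter(
            torch.randn(n_trainable, in_features) * in_features ** -0.5
        )
        # Frozen random slice: registered as a buffer so the optimizer never sees it.
        self.register_buffer(
            "frozen_weight", torch.randn(n_frozen, in_features) * in_features ** -0.5
        )
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate trainable and frozen rows into one weight matrix.
        weight = torch.cat([self.trainable_weight, self.frozen_weight], dim=0)
        return nn.functional.linear(x, weight, self.bias)


layer = DopedLinear(512, 1024, frozen_fraction=0.5)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
apparent = trainable + layer.frozen_weight.numel()
print(f"apparent params: {apparent}, trainable params: {trainable}")
```

Counting parameters this way makes the paper's finding easy to state: loss-versus-compute scaling tracks the trainable count, not the inflated apparent count.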

Authors (5)
  1. Amélie Chatelain (7 papers)
  2. Amine Djeghri (2 papers)
  3. Daniel Hesslow (12 papers)
  4. Julien Launay (17 papers)
  5. Iacopo Poli (18 papers)
Citations (6)