
Scale dependence of algorithmic progress parameters

Investigate whether algorithmic progress in the relation C_m = k log F_m + b is scale-dependent by determining if the slope parameter k varies across training compute regimes, and quantify how compute-efficiency reductions differ at different FLOP scales (e.g., 10^21 versus 10^25 FLOP).


Background

To estimate algorithmic progress, the paper models capability C_m as a log-linear function of training compute F_m with parameters k (compute scaling slope) and b (algorithmic quality offset). The primary analysis assumes progress occurs via increases in b over time with a fixed k.

The authors caution that algorithmic progress could also change k, implying scale dependence (e.g., larger gains at higher compute scales). They explicitly note insufficient data to test this, leaving a gap in understanding whether efficiency improvements vary by compute scale, an issue with practical implications for forecasting and policy.
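To make the distinction concrete, the model implies that a gain Δb in the offset is equivalent to multiplying available compute by exp(Δb / k); if k is the same at every scale, the compute-efficiency reduction is scale-independent, whereas a k that differs across regimes yields different reductions at 10^21 versus 10^25 FLOP. The sketch below illustrates this with purely hypothetical parameter values (the specific k and Δb numbers are assumptions for illustration, not estimates from the paper):

```python
import math

def capability(F, k, b):
    """Capability under the log-linear model C = k*log(F) + b."""
    return k * math.log(F) + b

def compute_equivalent_factor(delta_b, k):
    """Compute multiplier equivalent to an offset gain delta_b at slope k.

    Solving k*log(F_old) + (b + delta_b) = k*log(F_new) + b
    gives F_new / F_old = exp(delta_b / k): the factor by which
    compute requirements shrink for the same capability level.
    """
    return math.exp(delta_b / k)

# Scale-independent case (the paper's working assumption): a single k,
# so the same delta_b implies the same efficiency gain at every scale.
k = 0.5  # hypothetical slope, not an estimate from the paper
print(compute_equivalent_factor(0.5, k))  # same factor at 1e21 and 1e25 FLOP

# Scale-dependent case: if k itself differed across compute regimes,
# e.g. k = 0.4 near 1e21 FLOP and k = 0.6 near 1e25 FLOP (hypothetical),
# the same delta_b would imply different compute reductions.
print(compute_equivalent_factor(0.5, 0.4))  # larger reduction at low scale
print(compute_equivalent_factor(0.5, 0.6))  # smaller reduction at high scale
```

The open question is precisely whether the second case holds, which would require data spanning enough compute regimes to fit separate slopes.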

References

Most crucially, algorithmic progress could also involve changing k, such that the rate of algorithmic progress might depend on the specific scale of compute under question -- for example, the rate of reduction in compute requirements might be faster at 10^25 FLOP compared to 10^21 FLOP. Unfortunately, we do not have sufficient data to test this scale-dependence of algorithmic progress in detail, so for the purposes of this paper we present our results assuming scale-independence.

A Rosetta Stone for AI Benchmarks (2512.00193 - Ho et al., 28 Nov 2025) in Section 4.2 (Algorithmic progress)