Normalized Maximum Likelihood Distribution
- Normalized Maximum Likelihood (NML) is a universal coding method that defines probability via maximized likelihood and normalization (the Shtarkov sum), balancing model fit with intrinsic complexity.
- NML achieves minimax regret by normalizing over all possible data sequences, forming the basis for the MDL principle and providing robust, scale-invariant model selection.
- Extensions such as weighted NML, α-NML, and pNML broaden its applicability, particularly in predictive modeling and deep learning, by addressing computational challenges and enhancing inference accuracy.
The normalized maximum likelihood (NML) distribution is a foundational construct in universal coding, model selection, and statistical inference, providing a formal mechanism for balancing model fit against the intrinsic complexity of a model family. It uniquely achieves minimax regret in data compression and prediction and underpins the modern minimum description length (MDL) principle. The NML is defined via maximization and normalization over all possible data realizations, with the normalization constant—known as the Shtarkov sum—quantifying parametric complexity. In both discrete and continuous settings, NML tightly links information theory, statistics, and learning theory; it gives rise to practical inference criteria and inspires new developments through its predictive and generalized forms.
1. Formal Definition and Minimax Regret
Let $\mathcal{M} = \{p_\theta : \theta \in \Theta\}$ denote a parametric model family on data $x^n = (x_1, \dots, x_n)$. For an observed sample (or sequence) $x^n$, the NML density is given by
$$p_{\mathrm{NML}}(x^n) = \frac{p_{\hat{\theta}(x^n)}(x^n)}{C_n(\mathcal{M})},$$
with normalization (the Shtarkov sum)
$$C_n(\mathcal{M}) = \sum_{y^n} p_{\hat{\theta}(y^n)}(y^n) \;\text{(discrete case)}, \qquad C_n(\mathcal{M}) = \int p_{\hat{\theta}(y^n)}(y^n)\, dy^n \;\text{(continuous case)},$$
where $\hat{\theta}(\cdot)$ denotes the maximum likelihood estimator (Bickel, 2010, Suzuki et al., 2018, Suzuki et al., 2024).
The NML achieves the minimax regret criterion,
$$p_{\mathrm{NML}} = \arg\min_{q} \max_{x^n} \log \frac{p_{\hat{\theta}(x^n)}(x^n)}{q(x^n)},$$
ensuring that for any data sequence the excess codelength over the best model in the class is bounded by $\log C_n(\mathcal{M})$, a data-independent constant. No explicit prior is assumed, and the optimal worst-case regret is precisely $\log C_n(\mathcal{M})$ (Barron et al., 2014, Bondaschi et al., 2022).
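As a concrete illustration (a minimal sketch, not taken from any of the cited papers), the following Python snippet computes the Shtarkov sum for the Bernoulli family by brute-force enumeration of all binary sequences of length $n$ and checks that the regret of $p_{\mathrm{NML}}$ equals $\log C_n$ for every sequence:

```python
import itertools
import math

def bernoulli_ml_prob(seq):
    """Maximized likelihood p_{theta_hat(seq)}(seq) for the Bernoulli model."""
    n, k = len(seq), sum(seq)
    theta = k / n
    # 0**0 evaluates to 1, which matches the boundary MLE convention.
    return (theta ** k) * ((1 - theta) ** (n - k))

def shtarkov_sum(n):
    """C_n = sum of the maximized likelihood over all binary sequences of length n."""
    return sum(bernoulli_ml_prob(seq) for seq in itertools.product([0, 1], repeat=n))

n = 8
C = shtarkov_sum(n)
print(f"log C_{n} = {math.log(C):.4f}")   # parametric complexity

# NML probability and regret for a few sequences: the regret is log C for every one of them.
for seq in [(0,) * n, (0, 1) * (n // 2), (1,) * n]:
    p_nml = bernoulli_ml_prob(seq) / C
    regret = math.log(bernoulli_ml_prob(seq) / p_nml)
    print(seq, f"p_NML={p_nml:.5f}", f"regret={regret:.4f}")
```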
2. Parametric Complexity and the Shtarkov Sum
The normalization constant $C_n(\mathcal{M})$, the Shtarkov sum, quantifies the capacity of the model class to fit all possible data sequences:
- In discrete models, $C_n(\mathcal{M})$ is a finite sum over the countable sample space and can often be computed directly or approximated asymptotically (Boullé et al., 2016, Heck et al., 2014).
- In continuous models, $C_n(\mathcal{M})$ is an integral that generally diverges unless the data domain or parameter range is restricted. The foundation for its rigorous computation in continuous spaces has been established using the coarea formula from geometric measure theory, which decomposes the sample space according to the MLE mapping and incorporates a Jacobian determinant as a correction (Suzuki et al., 2024). Specifically,
$$C_n(\mathcal{M}) = \int_{\Theta} g(\theta)\, d\theta, \qquad g(\theta) = \int_{\hat{\theta}^{-1}(\theta)} \frac{p_\theta(y^n)}{J(y^n)}\, d\mathcal{H}^{n-k}(y^n),$$
where $g(\theta)$ is determined via Hausdorff measures $\mathcal{H}^{n-k}$ on the level sets $\hat{\theta}^{-1}(\theta)$ of the estimator, and $J$ is the Jacobian correction arising from the coarea formula.
In the MDL context, the total stochastic complexity of $x^n$ under NML splits as
$$-\log p_{\mathrm{NML}}(x^n) = -\log p_{\hat{\theta}(x^n)}(x^n) + \log C_n(\mathcal{M}),$$
with the first term assessing model fit and the second penalizing model complexity (Boullé et al., 2016, Suzuki et al., 2018).
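For discrete families the split can be evaluated exactly by summing over the sufficient statistic rather than over all sequences; the toy sketch below does this for the Bernoulli model (illustrative only, not code from the cited papers):

```python
import math

def log_bernoulli_complexity(n):
    """log C_n for the Bernoulli family, summing over the sufficient statistic k
    (count of ones) instead of over all 2**n sequences."""
    total = 0.0
    for k in range(n + 1):
        # maximized likelihood of any particular sequence containing k ones
        p = (k / n) ** k * ((n - k) / n) ** (n - k)
        total += math.comb(n, k) * p
    return math.log(total)

def stochastic_complexity(seq):
    """Split -log p_NML(seq) into (-log maximized likelihood) + (log C_n)."""
    n, k = len(seq), sum(seq)
    fit = -(k * math.log(k / n) if k else 0.0) \
          - ((n - k) * math.log((n - k) / n) if n - k else 0.0)
    penalty = log_bernoulli_complexity(n)
    return fit + penalty, fit, penalty

seq = (1, 1, 0, 1, 0, 0, 1, 1, 1, 0)
total, fit, penalty = stochastic_complexity(seq)
print(f"fit={fit:.3f} nats, complexity penalty={penalty:.3f} nats, total={total:.3f} nats")
```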
3. Asymptotics and Analytic Calculation
In regular exponential families, the asymptotic behavior of the parametric complexity is
$$\log C_n(\mathcal{M}) = \frac{k}{2}\log\frac{n}{2\pi} + \log \int_{\Theta} \sqrt{\det I(\theta)}\, d\theta + o(1),$$
where $k$ is the parameter dimension and $I(\theta)$ the Fisher information matrix (Bondaschi et al., 2022, Suzuki et al., 2018, Bickel, 2010). Recent advances have produced efficient, non-asymptotic formulas for $C_n(\mathcal{M})$ in exponential families using Fourier analysis, which transforms the maximization over $\theta$ into a tractable integral (Suzuki et al., 2018). For discrete families such as Bernoulli or multinomial, the computation reduces to finite sums over data summaries (sufficient statistics); for mixtures and continuous exponential families, explicit reparametrization or compactification is generally necessary to guarantee finiteness (Hirai et al., 2012, Hirai et al., 2017, Suzuki et al., 2024).
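The sketch below compares the exact Bernoulli parametric complexity (summed over the sufficient statistic, as above) with the asymptotic expansion; for the Bernoulli family, $k = 1$ and $\int_0^1 \sqrt{I(\theta)}\,d\theta = \int_0^1 (\theta(1-\theta))^{-1/2}\,d\theta = \pi$:

```python
import math

def exact_log_complexity(n):
    """Exact log C_n for the Bernoulli family via the sufficient statistic."""
    C = sum(math.comb(n, k) * (k / n) ** k * ((n - k) / n) ** (n - k)
            for k in range(n + 1))
    return math.log(C)

def asymptotic_log_complexity(n):
    """(k/2) log(n / 2pi) + log of the Fisher-information integral (= pi for Bernoulli)."""
    return 0.5 * math.log(n / (2 * math.pi)) + math.log(math.pi)

for n in (10, 100, 1000):
    print(n, round(exact_log_complexity(n), 4), round(asymptotic_log_complexity(n), 4))
```

The gap between the exact and asymptotic values shrinks as $n$ grows, illustrating the $o(1)$ remainder.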
4. Extensions: Weighted and Luckiness NML
Canonical NML is undefined for many unbounded or singular models (including most location families and overparameterized settings). Weighted NML (WNML), or "luckiness NML" (LNML), remedies this by introducing a weighting ("luckiness") function $w(\theta)$ in both numerator and denominator:
$$p_{\mathrm{LNML}}(x^n) = \frac{\max_\theta w(\theta)\, p_\theta(x^n)}{\int \max_\theta w(\theta)\, p_\theta(y^n)\, dy^n}$$
(Bondaschi et al., 2022, Bibas et al., 2022, Bickel, 2010). This approach regularizes the effective model class (for linear models, a Gaussian luckiness function makes the weighted ML estimator coincide with ridge regression), sidestepping the divergence of the Shtarkov integral inherent in unconstrained continuous families.
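As a small numerical check of the ridge connection (a sketch under the assumption of Gaussian noise with variance $\sigma^2$ and Gaussian luckiness $w(\theta) \propto \exp(-\lambda\|\theta\|^2/(2\sigma^2))$; all variable names here are illustrative), the maximizer of $w(\theta)\,p_\theta(y \mid X)$ coincides with the ridge estimator:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d, lam, sigma2 = 50, 3, 2.0, 1.0
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=np.sqrt(sigma2), size=n)

def neg_log_weighted_lik(theta):
    """-log of w(theta) * p_theta(y | X), up to additive constants."""
    return (np.sum((y - X @ theta) ** 2) + lam * np.sum(theta ** 2)) / (2 * sigma2)

theta_num = minimize(neg_log_weighted_lik, np.zeros(d)).x          # numerical weighted MLE
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)  # closed-form ridge estimator
print(np.allclose(theta_num, theta_ridge, atol=1e-4))              # True: they coincide
```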
The α-NML predictor further interpolates between the Bayes mixture (average regret, α = 1), NML (worst-case regret, α → ∞), and LNML (weighted models), optimizing a regret criterion based on the Rényi divergence of order α (Bondaschi et al., 2022).
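For orientation, a power-mean form consistent with this description (the exact construction and normalization in Bondaschi et al., 2022 may differ in detail) is
$$q_\alpha(x^n) \;\propto\; \left( \int_\Theta w(\theta)\, p_\theta(x^n)^{\alpha}\, d\theta \right)^{1/\alpha},$$
which reduces to the Bayes mixture $\int w(\theta)\,p_\theta(x^n)\,d\theta$ at $\alpha = 1$ and approaches the maximized likelihood $\sup_{\theta:\, w(\theta)>0} p_\theta(x^n)$, i.e., the NML numerator, as $\alpha \to \infty$.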
5. Predictive NML and Deep Learning
The predictive NML (pNML) is designed for supervised settings. Given a training set $z^N = \{(x_i, y_i)\}_{i=1}^N$ and a new input $x$, it defines the predictive distribution
$$q_{\mathrm{pNML}}(y \mid x) = \frac{p_{\hat{\theta}(z^N,\, x,\, y)}(y \mid x)}{\sum_{y'} p_{\hat{\theta}(z^N,\, x,\, y')}(y' \mid x)},$$
where $\hat{\theta}(z^N, x, y)$ is the ML estimate on the training set with the candidate pair $(x, y)$ appended (Bibas et al., 2019, Bibas et al., 2022). pNML achieves minimax pointwise regret for individual test samples, yielding strong guarantees on confidence and robustness.
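A minimal sketch of this procedure for binary classification, using scikit-learn's logistic regression as a stand-in model (note that sklearn's default L2 penalty means each per-label "genie" is a regularized rather than a pure ML fit, closer in spirit to luckiness-weighted pNML):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pnml_predict(X_train, y_train, x_test, labels=(0, 1)):
    """pNML for binary classification: refit with each candidate label appended,
    take the probability each refit assigns to its own label, then normalize."""
    genies = []
    for y in labels:
        X_aug = np.vstack([X_train, x_test])
        y_aug = np.append(y_train, y)
        clf = LogisticRegression(C=1.0).fit(X_aug, y_aug)       # genie: sees the candidate label
        proba = clf.predict_proba(x_test.reshape(1, -1))[0]
        genies.append(proba[list(clf.classes_).index(y)])
    norm = sum(genies)                                           # log(norm) is the pointwise regret
    return np.array(genies) / norm, np.log(norm)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# pNML probabilities and pointwise regret for an easy point and a boundary point.
for x_test in (np.array([2.0, 2.0]), np.array([0.0, 0.0])):
    probs, regret = pnml_predict(X_train, y_train, x_test)
    print(x_test, probs.round(3), f"regret={regret:.3f}")
```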
In deep networks, retraining for all candidate labels is intractable. The Deep-pNML method approximates pNML by fine-tuning only the final classification layer for each label hypothesis, yielding improved calibration, OOD detection, and adversarial robustness, with the pointwise regret serving as a confidence measure that spikes on low-confidence, out-of-distribution, and adversarial examples (Bibas et al., 2019).
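Schematically, the last-layer approximation can be sketched as below; this is a toy numpy stand-in using a frozen random feature map in place of a trained network, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained feature extractor (penultimate layer).
W_feat = rng.normal(size=(2, 16))
def features(X):
    return np.maximum(X @ W_feat, 0.0)          # frozen random-ReLU features

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def finetune_last_layer(F, y, n_classes, steps=500, lr=0.1):
    """Train only the final linear (softmax) layer by gradient descent on cross-entropy."""
    W = np.zeros((F.shape[1], n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(steps):
        P = softmax(F @ W)
        W -= lr * F.T @ (P - Y) / len(y)        # gradient of the mean cross-entropy
    return W

# Toy training set with two classes.
X_train = rng.normal(size=(60, 2))
y_train = (X_train[:, 0] > 0).astype(int)
F_train = features(X_train)

x_test = np.array([[0.1, 3.0]])                 # query point
f_test = features(x_test)

genies = []
for y_hyp in (0, 1):                            # one last-layer fine-tune per label hypothesis
    F_aug = np.vstack([F_train, f_test])
    y_aug = np.append(y_train, y_hyp)
    W = finetune_last_layer(F_aug, y_aug, n_classes=2)
    genies.append(softmax(f_test @ W)[0, y_hyp])

norm = sum(genies)
print("pNML probs:", [round(g / norm, 3) for g in genies], "regret:", round(np.log(norm), 3))
```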
6. Model Selection and Practical Applications
NML and its variants define core model selection criteria within MDL. The MDL-optimal model (e.g., the number of clusters in a Gaussian mixture model) minimizes the NML code length $-\log p_{\hat{\theta}_\mathcal{M}(x^n)}(x^n) + \log C_n(\mathcal{M})$ over the candidate models $\mathcal{M}$. For Gaussian mixtures, practical computation of the normalizer requires domain restriction (e.g., bounding the norms of the means and the eigenvalues of the covariances), with well-defined upper bounds on the NML code length providing universal criteria that are invariant under scaling and robust to domain parametrization (Hirai et al., 2012, Hirai et al., 2017).
Further, discrimination information (DI), defined as the log-ratio of NML likelihoods (equivalently, the difference in NML code lengths) for competing hypotheses, directly quantifies the strength of evidence and has strong asymptotic properties: unlike p-values, DI vanishes in probability under the null and requires neither priors nor averaging over hypothetical data (Bickel, 2010).
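As a toy illustration of both ideas (using a Bernoulli family rather than a Gaussian mixture so that every quantity is exactly computable), the sketch below compares the NML code length of a free-parameter model against a fixed null distribution; the shorter code is selected, and the code-length difference quantifies the strength of evidence:

```python
import math

def nml_code_length_free(seq):
    """-log p_NML under the Bernoulli family with a free parameter theta."""
    n, k = len(seq), sum(seq)
    fit = -(k * math.log(k / n) if k else 0.0) \
          - ((n - k) * math.log((n - k) / n) if n - k else 0.0)
    log_C = math.log(sum(math.comb(n, j) * (j / n) ** j * ((n - j) / n) ** (n - j)
                         for j in range(n + 1)))
    return fit + log_C

def code_length_fixed(seq, theta=0.5):
    """Code length under a single fixed distribution (no complexity penalty: C = 1)."""
    n, k = len(seq), sum(seq)
    return -(k * math.log(theta) + (n - k) * math.log(1 - theta))

seq = (1,) * 16 + (0,) * 4                        # 16 ones out of 20
L_free, L_fixed = nml_code_length_free(seq), code_length_fixed(seq)
print(f"NML(free theta) = {L_free:.3f} nats, fixed theta=0.5 = {L_fixed:.3f} nats")
# Positive difference: the free-theta model earns a shorter code despite its complexity penalty.
print(f"evidence (code-length difference) = {L_fixed - L_free:.3f} nats")
```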
7. Bayesian Properties and Connections
Although NML is rooted in minimax regret and universal compression, it admits a Bayes-like mixture representation $p_{\mathrm{NML}}(x^n) = \int p_\theta(x^n)\, d\mu(\theta)$, where the mixing weights defined by $\mu$ may be positive or negative. For certain one-dimensional exponential families, nonnegative weights suffice; in general, a signed prior is required. This representation clarifies the relationship between MDL and Bayesian inference and also provides computational benefits, enabling fast marginal and conditional calculation for coding and prediction (Barron et al., 2014). Asymptotically, NML and Bayesian mixtures with the Jeffreys prior coincide, but for finite samples differences persist, especially under sharp model constraints or order restrictions (Heck et al., 2014).
Table: Core NML Variants and Their Domains
| Variant | Definition/Formula | Typical Use |
|---|---|---|
| NML | $p_{\hat{\theta}(x^n)}(x^n) \big/ \sum_{y^n} p_{\hat{\theta}(y^n)}(y^n)$ | Minimax regret; finite or regular families |
| LNML / WNML | Weighted MLEs; numerator and normalizer use a luckiness (prior-like) function $w(\theta)$ | Divergent / overparameterized models |
| α-NML | Minimizes Rényi regret of order α; interpolates Bayes mixture and NML | Trading off average vs. worst-case regret |
| pNML | $p_{\hat{\theta}(z^N, x, y)}(y \mid x) \big/ \sum_{y'} p_{\hat{\theta}(z^N, x, y')}(y' \mid x)$ | Predictive, distribution-free supervised learning |
References
- “Alpha-NML Universal Predictors” (Bondaschi et al., 2022)
- “Deep pNML: Predictive Normalized Maximum Likelihood for Deep Neural Networks” (Bibas et al., 2019)
- “Foundation of Calculating Normalized Maximum Likelihood for Continuous Probability Models” (Suzuki et al., 2024)
- “Bayesian Properties of Normalized Maximum Likelihood and its Fast Computation” (Barron et al., 2014)
- “Statistical inference optimized with respect to the observed sample for single or multiple comparisons” (Bickel, 2010)
- “Beyond Ridge Regression for Distribution-Free Data” (Bibas et al., 2022)
- “Normalized Maximum Likelihood Coding for Exponential Family with Its Applications to Optimal Clustering” (Hirai et al., 2012)
- “Upper Bound on Normalized Maximum Likelihood Codes for Gaussian Mixture Models” (Hirai et al., 2017)
- “Revisiting enumerative two-part crude MDL for Bernoulli and multinomial distributions” (Boullé et al., 2016)
- “Exact Calculation of Normalized Maximum Likelihood Code Length Using Fourier Analysis” (Suzuki et al., 2018)
- “Testing Order Constraints: Qualitative Differences Between Bayes Factors and Normalized Maximum Likelihood” (Heck et al., 2014)
The NML framework thus constitutes a rigorous cornerstone for universal coding, statistical learning, and robust inference, with ongoing extensions addressing its computational challenges and informing predictive modeling, regularization, and deep learning applications.