Information-based inference for singular models and finite sample sizes: A frequentist information criterion (1506.05855v5)

Published 19 Jun 2015 in stat.ML, cs.LG, and physics.data-an

Abstract: In the information-based paradigm of inference, model selection is performed by selecting the candidate model with the best estimated predictive performance. The success of this approach depends on the accuracy of the estimate of the predictive complexity. In the large-sample-size limit of a regular model, the predictive performance is well estimated by the Akaike Information Criterion (AIC). However, this approximation can significantly under- or over-estimate the complexity in a wide range of important applications where models are non-regular or where finite-sample-size corrections are significant. We introduce an improved approximation for the complexity that is used to define a new information criterion: the Frequentist Information Criterion (QIC). QIC extends the applicability of information-based inference to the finite-sample-size regime of regular models and to singular models. We demonstrate the power and the comparative advantage of QIC in a number of example analyses.
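
For context, a minimal illustrative sketch (not from the paper) of the AIC-based model selection the abstract refers to, assuming a Gaussian polynomial-regression setting: each candidate model is scored by its maximized log-likelihood penalized by a complexity term, and the lowest score wins. QIC replaces the AIC complexity term with an improved finite-sample / singular-model estimate, which is not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical example data: the true model is linear with Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=x.size)

def aic_polynomial(x, y, degree):
    """AIC = 2k - 2*log(L_max) for a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma2 = np.mean(residuals ** 2)          # MLE of the noise variance
    log_like = np.sum(stats.norm.logpdf(residuals, scale=np.sqrt(sigma2)))
    k = degree + 2                            # polynomial coefficients + noise variance
    return 2 * k - 2 * log_like

# Select the candidate with the smallest AIC (best estimated predictive performance
# under the large-sample, regular-model approximation discussed in the abstract).
for d in range(4):
    print(f"degree {d}: AIC = {aic_polynomial(x, y, d):.2f}")
```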

Citations (4)
