
A simple data-driven method to optimise the penalty strengths of penalised models and its application to non-parametric smoothing (2206.04067v1)

Published 8 Jun 2022 in stat.ME and astro-ph.GA

Abstract: Information of interest can often only be extracted from data by model fitting. When the functional form of such a model cannot be deduced from first principles, one has to choose between different possible models. A common approach in such cases is to minimise the information loss in the model by reducing the number of fit variables (or, equivalently, the model flexibility) as much as possible while still yielding an acceptable fit to the data. Model selection via the Akaike Information Criterion (AIC) provides such an implementation of Occam's razor. We argue that the same principles can be applied to optimise the penalty strength of a penalised maximum-likelihood model. However, while in typical applications AIC is used to choose from a finite, discrete set of maximum-likelihood models, penalty optimisation requires selecting from a continuum of candidate models, and these models violate the maximum-likelihood condition. We derive a generalised information criterion AICp that encompasses this case. It naturally involves the concept of effective free parameters, which is very flexible and can be applied to any model, be it linear or non-linear, parametric or non-parametric, and with or without constraint equations on the parameters. We show that the generalised AICp allows the optimisation of any penalty strength without the need for separate Monte Carlo simulations. As an example application, we discuss the optimisation of the smoothing in non-parametric models, which has many applications in astrophysics, such as dynamical modelling, spectral fitting and gravitational lensing.
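The abstract's core idea — scoring each penalty strength by a goodness-of-fit term plus twice an effective number of free parameters — can be illustrated with a minimal sketch. Note that this does not reproduce the paper's AICp derivation: it uses the classical Gaussian AIC with the effective parameter count taken as the trace of the hat matrix of a penalised linear smoother, a standard analogue of the concept the abstract describes. All names (`lam`, `P`, `fit_penalised`) and the second-difference curvature penalty are illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth function on a uniform grid.
n = 100
x = np.linspace(0.0, 1.0, n)
y = np.sin(2.0 * np.pi * x) + rng.normal(scale=0.3, size=n)

# Second-difference penalty matrix: P penalises curvature of the fit.
D = np.diff(np.eye(n), n=2, axis=0)
P = D.T @ D

def fit_penalised(y, lam):
    """Penalised least squares: f = argmin ||y - f||^2 + lam * f^T P f.

    Returns the fitted values and the effective number of free
    parameters, here the trace of the hat matrix H = (I + lam P)^{-1}.
    """
    H = np.linalg.inv(np.eye(n) + lam * P)
    return H @ y, np.trace(H)

def aic(y, lam):
    f, n_eff = fit_penalised(y, lam)
    rss = np.sum((y - f) ** 2)
    # Gaussian-likelihood AIC with the effective parameter count n_eff
    # in place of a discrete number of fit variables.
    return n * np.log(rss / n) + 2.0 * n_eff

# Scan a continuum of penalty strengths and keep the AIC minimum;
# no separate Monte Carlo calibration is needed.
lams = np.logspace(-2, 4, 61)
best = min(lams, key=lambda lam: aic(y, lam))
print(f"optimal lambda ~ {best:.3g}, "
      f"effective parameters ~ {fit_penalised(y, best)[1]:.1f}")
```

As the penalty strength grows, the effective parameter count shrinks smoothly from n towards the dimension of the penalty's null space (2 here, for straight lines), which is what makes a continuous AIC-style trade-off between fit quality and flexibility possible.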
