Real Log Canonical Thresholds at Non-singular Points (2408.13030v1)

Published 23 Aug 2024 in math.ST and stat.TH

Abstract: Recent advances in Bayesian learning theory have shown that the asymptotic behavior of quantities measuring predictive accuracy, such as the generalization loss and the free energy, is governed by a rational number specific to each statistical model, called the learning coefficient (real log canonical threshold). For models satisfying the regularity conditions, the learning coefficient is known. For singular models that violate these conditions, exact values have been obtained only for particular models such as reduced-rank regression, and no broadly applicable method for computing their learning coefficients exists. This paper extends the range of application of previous work and gives an approach that applies to many points in the set of realizable parameters: specifically, a formula for the real log canonical threshold at many non-singular points of that set. Whenever this calculation can be carried out, it yields an upper bound on the learning coefficient of the statistical model, so the approach provides an easy way to bound learning coefficients from above. As applications, the paper derives an upper bound for the learning coefficient of a mixed binomial model and computes the learning coefficient for a specific case of reduced-rank regression, confirming agreement with previous results.
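
As background (standard facts from Watanabe's singular learning theory, not the paper's new formula, which requires the full text): let q(x) be the true distribution, p(x|w) the model, \varphi(w) the prior, and K(w) the Kullback-Leibler divergence from q to p(\cdot|w). The zeta function

    \zeta(z) = \int K(w)^{z} \, \varphi(w) \, dw

admits a meromorphic continuation; its largest pole lies at z = -\lambda with order m, and this \lambda is the real log canonical threshold (learning coefficient). It controls the asymptotics referred to in the abstract:

    F_n = n S_n + \lambda \log n - (m - 1) \log\log n + O_p(1),
    \mathbb{E}[G_n] = S + \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right),

where F_n is the Bayesian free energy, G_n the generalization loss, S_n the empirical entropy, and S the entropy of q. For a regular model with d parameters, \lambda = d/2 and m = 1; for singular models with a positive, bounded prior, \lambda \le d/2. An upper bound on \lambda, such as the bounds obtained in this paper, therefore directly bounds the leading 1/n term of the generalization loss.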
