
Optimization-centric cutting feedback for semiparametric models (2509.18708v1)

Published 23 Sep 2025 in stat.ME, math.ST, stat.CO, and stat.TH

Abstract: Modern statistics deals with complex models from which the joint model used for inference is built by coupling submodels, called modules. We consider modular inference where the modules may depend on parametric and nonparametric components. In such cases, a joint Bayesian inference is highly susceptible to misspecification across any module, and inappropriate priors for nonparametric components may deliver subpar inferences for parametric components, and vice versa. We propose a novel "optimization-centric" approach to cutting feedback for semiparametric modular inference, which can address misspecification and prior-data conflicts. The proposed generalized cut posteriors are defined through a variational optimization problem for generalized posteriors where regularization is based on Rényi divergence, rather than Kullback-Leibler divergence (KLD), and variational computational methods are developed. We show empirically that using Rényi divergence to define the cut posterior delivers more robust inferences than KLD. We derive novel posterior concentration results that accommodate the Rényi divergence and allow for semiparametric components, greatly extending existing results for cut posteriors that were derived for parametric models and KLD. We demonstrate these new methods in a benchmark toy example and two real examples: Gaussian process adjustments for confounding in causal inference and misspecified copula models with nonparametric marginals.
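
For orientation, a minimal sketch of the optimization-centric framing described in the abstract, written in generic notation that is not necessarily the paper's own: a generalized posterior for a module with parameter \theta, data y, loss \ell, prior \pi, and learning rate w > 0 can be cast as the solution of a variational problem

\hat{q} = \arg\min_{q \in \mathcal{Q}} \; \mathbb{E}_{q(\theta)}\big[\ell(\theta; y)\big] + \tfrac{1}{w}\, D_{\alpha}(q \,\|\, \pi),

where D_{\alpha} denotes the Rényi divergence of order \alpha, replacing the KLD regularizer used in standard variational formulations. In a two-module setting, a cut posterior of this kind would solve such a problem for the first module alone and then fix that solution inside the second module's objective, so that misspecification in the second module cannot feed back into inference for the first.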
