Debiased Prediction Inference with Non-sparse Loadings in Misspecified High-dimensional Regression Models

Published 15 Jul 2025 in math.ST and stat.TH (arXiv:2507.10944v1)

Abstract: High-dimensional regression models with regularized sparse estimation are widely applied. For statistical inference, debiased methods exist for single coefficients and for predictions with sparse new covariate vectors (also called loadings), even in the presence of possible model misspecification. However, inference for predictions with non-sparse loadings has been studied only under the assumption of a correctly specified model. In this work, we develop debiased estimation and associated Wald confidence intervals for predictions with general, possibly non-sparse loadings from possibly misspecified high-dimensional regression models. Our debiased estimator requires estimating a debiasing vector: the general loading left-multiplied by the non-centered precision matrix in the linear model (LM) setting, or by the inverse Hessian of the objective function at the target coefficient vector in the generalized linear model (GLM) setting. We propose suitable estimators of the precision matrix and the inverse Hessian in the LM and GLM settings respectively and, for the first time, establish a root-n asymptotic expansion for the debiased prediction and justify the associated Wald confidence intervals under sparsity conditions on the precision matrix or the inverse Hessian that are comparable to the conjunction of the sparsity conditions required for inference about all single coefficients in existing work. We also provide numerical results that further demonstrate the validity of the proposed confidence intervals for predictions with general loadings from possibly misspecified regression models.
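
For orientation, here is a minimal sketch of how a debiased prediction estimator of this kind typically looks in the LM setting, written in generic debiased-lasso notation; the loading a, regularized estimator \hat{\beta}, and debiasing vector \hat{u} below are illustrative assumptions, not necessarily the paper's exact construction:

\[
  \widehat{a^{\top}\beta}_{\mathrm{db}}
    \;=\; a^{\top}\hat{\beta}
    \;+\; \hat{u}^{\top}\,\frac{1}{n}\,X^{\top}\bigl(y - X\hat{\beta}\bigr),
  \qquad
  \hat{u} \;\approx\; \Sigma^{-1} a,
  \quad
  \Sigma = \mathrm{E}\bigl[x x^{\top}\bigr],
\]

so that \hat{u} estimates the loading left-multiplied by the non-centered precision matrix, as described in the abstract. In the GLM setting, the role of \Sigma^{-1} would instead be played by the inverse Hessian of the objective function at the target coefficient vector, and a Wald confidence interval would follow from the root-n asymptotic expansion with a plug-in variance estimate.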

