
Confidence regions for high-dimensional generalized linear models under sparsity (1610.01353v1)

Published 5 Oct 2016 in stat.ME, math.ST, and stat.TH

Abstract: We study asymptotically normal estimation and confidence regions for low-dimensional parameters in high-dimensional sparse models. Our approach is based on the $\ell_1$-penalized M-estimator which is used for construction of a bias corrected estimator. We show that the proposed estimator is asymptotically normal, under a sparsity assumption on the high-dimensional parameter, smoothness conditions on the expected loss and an entropy condition. This leads to uniformly valid confidence regions and hypothesis testing for low-dimensional parameters. The present approach is different in that it allows for treatment of loss functions that are not sufficiently differentiable, such as quantile loss, Huber loss or hinge loss functions. We also provide new results for estimation of the inverse Fisher information matrix, which is necessary for the construction of the proposed estimator. We formulate our results for general models under high-level conditions, but investigate these conditions in detail for generalized linear models and provide mild sufficient conditions. As particular examples, we investigate the case of quantile loss and Huber loss in linear regression and demonstrate the performance of the estimators in a simulation study and on real datasets from genome-wide association studies. We further investigate the case of logistic regression and illustrate the performance of the estimator on simulated and real data.
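
To make the construction concrete, below is a minimal Python sketch of the generic debiasing recipe the abstract describes, specialized to $\ell_1$-penalized logistic regression: an initial penalized fit, a nodewise-lasso estimate of one row of the inverse Fisher information, and a one-step bias correction that yields an asymptotically normal coordinate estimate with a Wald-type confidence interval. The scikit-learn calls, the tuning parameters alpha_lasso and alpha_node, and the helper debiased_logistic are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch (assumed setup, not the authors' code): debiased l1-penalized
# logistic regression for a single coordinate j, with a nodewise-lasso estimate
# of the corresponding row of the inverse Fisher information.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression, Lasso


def debiased_logistic(X, y, j, alpha_lasso=0.1, alpha_node=0.1):
    """Return a bias-corrected estimate and 95% CI for coefficient j (y in {0,1})."""
    n, p = X.shape

    # Step 1: initial l1-penalized M-estimator (logistic loss).
    fit = LogisticRegression(penalty="l1", C=1.0 / (n * alpha_lasso),
                             solver="liblinear", fit_intercept=False)
    fit.fit(X, y)
    beta_hat = fit.coef_.ravel()

    # Step 2: empirical score and estimated Fisher information at beta_hat.
    mu = 1.0 / (1.0 + np.exp(-X @ beta_hat))      # fitted probabilities
    score = X.T @ (mu - y) / n                    # gradient of the average loss
    w = mu * (1.0 - mu)
    Xw = X * np.sqrt(w)[:, None]                  # rows scaled by sqrt(weights)
    Sigma_hat = Xw.T @ Xw / n                     # estimated Fisher information

    # Step 3: j-th row of an approximate inverse Fisher information via
    # nodewise lasso (regress the j-th weighted column on the others).
    idx = np.arange(p) != j
    node = Lasso(alpha=alpha_node, fit_intercept=False).fit(Xw[:, idx], Xw[:, j])
    gamma = node.coef_
    tau2 = Sigma_hat[j, j] - Sigma_hat[j, idx] @ gamma
    theta_j = np.zeros(p)
    theta_j[j] = 1.0
    theta_j[idx] = -gamma
    theta_j /= tau2

    # Step 4: one-step bias correction and asymptotic 95% interval.
    b_j = beta_hat[j] - theta_j @ score
    se = np.sqrt(theta_j @ Sigma_hat @ theta_j / n)
    z = norm.ppf(0.975)
    return b_j, (b_j - z * se, b_j + z * se)
```

The same template covers the other losses discussed in the abstract by swapping the score and the Fisher-information estimate; for non-smooth losses such as quantile or Huber loss, the gradient step is replaced by the corresponding subgradient or smoothed derivative, which is where the paper's weaker differentiability conditions come into play.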
