
Asymptotically Optimal Bias Reduction for Parametric Models (2002.08757v1)

Published 19 Feb 2020 in math.ST, stat.CO, stat.ME, and stat.TH

Abstract: An important challenge in statistical analysis concerns the control of the finite sample bias of estimators. This problem is magnified in high-dimensional settings where the number of variables $p$ diverges with the sample size $n$, as well as for nonlinear models and/or models with discrete data. For these complex settings, we propose to use a general simulation-based approach and show that the resulting estimator has a bias of order $\mathcal{O}(0)$, hence providing an asymptotically optimal bias reduction. It is based on an initial estimator that can be slightly asymptotically biased, making the approach very generally applicable. This is particularly relevant when classical estimators, such as the maximum likelihood estimator, can only be (numerically) approximated. We show that the iterative bootstrap of Kuk (1995) provides a computationally efficient approach to compute this bias reduced estimator. We illustrate our theoretical results in simulation studies for which we develop new bias reduced estimators for the logistic regression, with and without random effects. These estimators enjoy additional properties such as robustness to data contamination and to the problem of separability.
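The abstract points to the iterative bootstrap of Kuk (1995) as a computationally efficient way to obtain the bias-reduced estimator. A minimal sketch of that iteration, in generic form: starting from an initial (possibly biased) estimator applied to the observed data, each step simulates data at the current parameter value, re-applies the initial estimator, and shifts the current value by the estimated bias. The function names (`pi_hat`, `simulate`) and the toy normal-variance example below are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Iterative bootstrap of Kuk (1995), sketched for a generic parametric model:
# pi_hat is the (possibly biased) initial estimator, and simulate(theta, rng)
# draws one synthetic sample from the model at parameter value theta.
def iterative_bootstrap(pi_hat, simulate, data, H=1000, iters=20, seed=0):
    pi_obs = pi_hat(data)        # initial estimate on the observed data
    theta = pi_obs
    for _ in range(iters):
        # reuse the same random draws each iteration (common random numbers)
        rng = np.random.default_rng(seed)
        sims = np.mean([pi_hat(simulate(theta, rng)) for _ in range(H)], axis=0)
        theta = theta + (pi_obs - sims)   # correct by the estimated bias at theta
    return theta

# Toy illustration (our own example): the MLE of a normal variance divides
# by n and is biased downward; the iterative bootstrap approximately
# recovers the unbiased n/(n-1) correction.
n = 50
rng = np.random.default_rng(42)
data = rng.normal(0.0, 2.0, size=n)
var_mle = lambda x: np.var(x)                         # biased: divides by n
sim = lambda theta, r: np.sqrt(theta) * r.standard_normal(n)
theta_ib = iterative_bootstrap(var_mle, sim, data)
```

With common random numbers the update is a contraction here, so `theta_ib` settles close to the fixed point where the simulated mean of the estimator matches its observed value, i.e. close to the unbiased `np.var(data, ddof=1)`.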
