
Empirical bias-reducing adjustments to estimating functions

Published 11 Jan 2020 in stat.ME, math.ST, and stat.TH (arXiv:2001.03786v4)

Abstract: We develop a novel and general framework for reduced-bias $M$-estimation from asymptotically unbiased estimating functions. The framework relies on an empirical approximation of the bias by a function of derivatives of estimating function contributions. Reduced-bias $M$-estimation operates either implicitly, by solving empirically-adjusted estimating equations, or explicitly, by subtracting the estimated bias from the original $M$-estimates, and applies to models that are partially- or fully-specified, with either likelihoods or other surrogate objectives. Automatic differentiation can be used to abstract away the only algebra required to implement reduced-bias $M$-estimation. As a result, the bias reduction methods we introduce have markedly broader applicability with more straightforward implementation and less algebraic or computational effort than other established bias-reduction methods that require resampling or evaluation of expectations of products of log-likelihood derivatives. If $M$-estimation is by maximizing an objective, then there always exists a bias-reducing penalized objective. That penalized objective relates closely to information criteria for model selection, and can be further enhanced with plug-in penalties to deliver reduced-bias $M$-estimates with extra properties, like finiteness in models for categorical data. The reduced-bias $M$-estimators have the same asymptotic distribution as the original $M$-estimators, and, hence, standard procedures for inference and model selection apply unaltered with the improved estimates. We demonstrate and assess the properties of reduced-bias $M$-estimation in well-used, prominent modelling settings of varying complexity.
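The explicit route described in the abstract (subtracting an estimated bias from the original M-estimate) can be illustrated in a minimal textbook case. The sketch below does not implement the paper's general empirical adjustment; it assumes the classical first-order bias $b(\lambda) \approx \lambda/n$ of the exponential-rate MLE $\hat\lambda = 1/\bar{x}$, and checks by simulation that the explicitly corrected estimate $\tilde\lambda = \hat\lambda(1 - 1/n)$ has smaller bias:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 2.0, 10, 20000  # true rate, sample size, Monte Carlo replications

# Simulate reps samples of size n from Exponential(rate = lam).
x = rng.exponential(scale=1.0 / lam, size=(reps, n))

# Original M-estimate: the MLE of the rate, 1 / sample mean.
mle = 1.0 / x.mean(axis=1)

# Explicit bias reduction: subtract the estimated first-order bias
# b(lam_hat) = lam_hat / n, giving lam_hat * (1 - 1/n).
rb = mle - mle / n

print("MLE bias:", mle.mean() - lam)
print("reduced-bias estimate bias:", rb.mean() - lam)
```

In this toy case the corrected estimator $(n-1)/(n\bar{x})$ is in fact exactly unbiased; the point of the paper's framework is that such corrections can be computed generically, with the required derivatives of estimating-function contributions obtained by automatic differentiation rather than case-by-case algebra.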
