
Understanding the Risks and Rewards of Combining Unbiased and Possibly Biased Estimators, with Applications to Causal Inference (2205.10467v2)

Published 21 May 2022 in stat.ME

Abstract: Several problems in statistics involve the combination of high-variance unbiased estimators with low-variance estimators that are only unbiased under strong assumptions. A notable example is the estimation of causal effects while combining small experimental datasets with larger observational datasets. There exist a series of recent proposals on how to perform such a combination, even when the bias of the low-variance estimator is unknown. To build intuition for the differing trade-offs of competing approaches, we argue for examining the finite-sample estimation error of each approach as a function of the unknown bias. This includes understanding the bias threshold -- the largest bias for which a given approach improves over using the unbiased estimator alone. Through this lens, we review several recent proposals, and observe in simulation that different approaches exhibit qualitatively different behavior. We also introduce a simple alternative approach, which compares favorably in simulation to recent alternatives, having a higher bias threshold and generally making a more conservative trade-off between best-case performance (when the bias is zero) and worst-case performance (when the bias is adversarially chosen). More broadly, we prove that for any amount of (unknown) bias, the MSE of this estimator can be bounded in a transparent way that depends on the variance / covariance of the underlying estimators that are being combined.
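The bias-threshold idea from the abstract can be illustrated with a minimal simulation. The sketch below is not the paper's proposed estimator; it uses a simple fixed-weight convex combination (weight `w` is an assumed illustrative choice) of a high-variance unbiased estimator and a low-variance estimator with unknown bias, and scans the bias to locate the point where the combination stops beating the unbiased estimator alone.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0        # true parameter (known here only because we simulate)
n_sims = 200_000   # Monte Carlo replications

def mse_of_combination(bias, w, sigma_u=1.0, sigma_b=0.3):
    """MSE of the convex combination w*u + (1-w)*b, where u is unbiased
    with std sigma_u and b has the given bias and smaller std sigma_b."""
    u = theta + rng.normal(0.0, sigma_u, n_sims)         # unbiased, high variance
    b = theta + bias + rng.normal(0.0, sigma_b, n_sims)  # biased, low variance
    combined = w * u + (1.0 - w) * b
    return float(np.mean((combined - theta) ** 2))

# MSE of the unbiased estimator alone is just its variance, sigma_u**2 = 1.
mse_unbiased = 1.0

# Scan bias values: the bias threshold is the largest bias for which
# the combination still improves on the unbiased estimator alone.
for bias in np.linspace(0.0, 2.0, 11):
    mse_c = mse_of_combination(bias, w=0.3)
    verdict = "better" if mse_c < mse_unbiased else "worse"
    print(f"bias={bias:.1f}  combined MSE={mse_c:.3f}  ({verdict})")
```

Analytically, this combination has MSE $w^2\sigma_u^2 + (1-w)^2\sigma_b^2 + (1-w)^2\,\mathrm{bias}^2$, so its best-case MSE (zero bias) is well below 1 while its worst-case MSE grows without bound in the bias, which is exactly the trade-off the abstract describes; the paper's adaptive proposals aim to make this trade-off without knowing the bias.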
