Sensitivity Analysis for Unmeasured Confounding via Effect Extrapolation (2102.01935v1)

Published 3 Feb 2021 in stat.ME and stat.AP

Abstract: Inferring the causal effect of a non-randomly assigned exposure on an outcome requires adjusting for common causes of the exposure and outcome to avoid biased conclusions. Notwithstanding the efforts investigators routinely make to measure and adjust for such common causes (or confounders), some confounders typically remain unmeasured, raising the prospect of biased inference in observational studies. Therefore, it is crucial that investigators can practically assess their substantive conclusions' relative (in)sensitivity to potential unmeasured confounding. In this article, we propose a sensitivity analysis strategy that is informed by the stability of the exposure effect over different, well-chosen subsets of the measured confounders. The proposal entails first approximating the process for recording confounders to learn about how the effect is potentially affected by varying amounts of unmeasured confounding, then extrapolating to the effect had hypothetical unmeasured confounders been additionally adjusted for. A large set of measured confounders can thus be exploited to provide insight into the likely presence of unmeasured confounding bias, albeit under an assumption about how data on the confounders are recorded. The proposal's ability to reveal the true effect and ensure valid inference after extrapolation is empirically compared with existing methods using simulation studies. We demonstrate the procedure using two different publicly available datasets commonly used for causal inference.
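The extrapolation idea sketched in the abstract — estimating the exposure effect while adjusting for progressively larger subsets of the measured confounders, then extrapolating the resulting trend toward the effect under hypothetical additional (unmeasured) confounders — can be illustrated with a minimal simulation. This is an illustrative sketch only, not the paper's actual estimator: the synthetic data, the nested ordering of confounder subsets, and the simple linear extrapolation of the trend are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 5000, 8          # samples, measured confounders (illustrative)
tau = 1.0               # true causal effect of exposure X on outcome Y

# Confounders C influence both exposure and outcome.
C = rng.normal(size=(n, p))
gamma = np.full(p, 0.5)          # confounder effects on the exposure
beta = np.full(p, 0.5)           # confounder effects on the outcome
X = C @ gamma + rng.normal(size=n)
Y = tau * X + C @ beta + rng.normal(size=n)

def ols_effect(X, Y, C_sub):
    """OLS coefficient on X after adjusting for the columns of C_sub."""
    design = np.column_stack([np.ones(len(X)), X, C_sub])
    coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
    return coef[1]

# Effect estimates as more measured confounders are adjusted for.
ks = np.arange(1, p + 1)
estimates = np.array([ols_effect(X, Y, C[:, :k]) for k in ks])

# Extrapolate the trend to k = p + m hypothetical unmeasured confounders
# with a simple linear fit; the paper's extrapolation is more principled.
m = 2
slope, intercept = np.polyfit(ks, estimates, 1)
extrapolated = slope * (p + m) + intercept
```

As intended, the adjusted estimates move toward the true effect as more confounders enter the model, and the extrapolated value gauges how much further the estimate might shift if comparable unmeasured confounders were also adjusted for.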
