Explaining Practical Differences Between Treatment Effect Estimators with High Dimensional Asymptotics (2203.12538v2)
Abstract: We revisit the classical causal inference problem of estimating the average treatment effect in the presence of fully observed confounding variables using two-stage semiparametric methods. In existing theoretical studies of methods such as G-computation, inverse propensity weighting (IPW), and two common doubly robust estimators -- augmented IPW (AIPW) and targeted maximum likelihood estimation (TMLE) -- these estimators are either bias-dominated or have similar asymptotic statistical properties. However, when applied to real datasets, they often appear to have notably different variances. We compare these methods when a machine learning (ML) model is used to estimate the nuisance parameters of the semiparametric model, and we highlight some of the important differences. When the outcome model estimates have little bias, which is common for several key ML models, G-computation and TMLE outperform the other estimators in both bias and variance. We show that the differences can be explained using high-dimensional statistical theory, in which the number of confounders $d$ is of the same order as the sample size $n$. To make this theoretical problem tractable, we posit a generalized linear model for the effect of the confounders on the treatment assignment and the outcomes. Despite these parametric assumptions, the setting is a useful surrogate for some machine learning methods used to adjust for confounding in two-stage semiparametric estimation. In particular, estimating the first stage adds variance that does not vanish, forcing us to confront terms in the asymptotic expansion that are normally brushed aside as finite-sample defects. As a result, our model emphasizes differences in performance between these estimators that go beyond first-order asymptotics.
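For reference, a minimal sketch of the standard forms of the first three estimators compared above, written in terms of fitted outcome regressions $\hat\mu_a(x) \approx \mathbb{E}[Y \mid A = a, X = x]$ and a fitted propensity score $\hat e(x) \approx \mathbb{P}(A = 1 \mid X = x)$ (this notation is illustrative, not taken from the paper):
$$\hat\tau_{\mathrm{G}} = \frac{1}{n}\sum_{i=1}^{n}\bigl(\hat\mu_1(X_i)-\hat\mu_0(X_i)\bigr), \qquad \hat\tau_{\mathrm{IPW}} = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{A_i Y_i}{\hat e(X_i)}-\frac{(1-A_i)Y_i}{1-\hat e(X_i)}\right),$$
$$\hat\tau_{\mathrm{AIPW}} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat\mu_1(X_i)-\hat\mu_0(X_i)+\frac{A_i\bigl(Y_i-\hat\mu_1(X_i)\bigr)}{\hat e(X_i)}-\frac{(1-A_i)\bigl(Y_i-\hat\mu_0(X_i)\bigr)}{1-\hat e(X_i)}\right).$$
TMLE takes the same plug-in form as G-computation, but first updates (fluctuates) $\hat\mu_a$ along a direction determined by $\hat e$ so that the resulting plug-in estimate solves the efficient influence function estimating equation.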