Covariate-Adjusted Estimators
- Covariate-adjusted estimators are statistical methods that leverage auxiliary baseline covariates to improve the precision of treatment effect estimates.
- They use techniques such as regression, standardization, and machine learning to flexibly model outcomes, leading to efficiency gains in various experimental designs.
- These methods apply across randomized trials, regression discontinuity, and observational studies, with theoretical guarantees rooted in semiparametric theory.
Covariate-adjusted estimators constitute a class of statistical methods that leverage auxiliary covariates to enhance efficiency, reduce bias, or achieve unbiasedness in the estimation of treatment effects or other inferential targets. Their formulation spans randomized, stratified, and cluster-randomized experiments, pairwise comparison models, regression discontinuity designs, factorial experiments, and contexts with interference or complex outcome structures. The techniques combine semiparametric theory, influence-function corrections, and modern machine learning for the flexible estimation of nuisance parameters. This article presents a comprehensive account of their definitions, construction, theoretical properties, efficiency guarantees, and empirical applications.
1. Fundamental Definitions and Model Structure
Covariate-adjusted estimators are designed to exploit observed baseline variables $X$ that are predictive of the outcome but unaffected by the treatment assignment $A$. In randomized experiments, the canonical estimand is the average treatment effect (ATE),
$$\tau = E[Y(1) - Y(0)],$$
with $Y(1)$ and $Y(0)$ denoting potential outcomes. The unadjusted difference-in-means,
$$\hat{\tau}_{\mathrm{DM}} = \frac{1}{n_1}\sum_{i:\,A_i=1} Y_i \;-\; \frac{1}{n_0}\sum_{i:\,A_i=0} Y_i,$$
remains unbiased but may be inefficient when $X$ predicts $Y$ (Masoero et al., 2023).
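As a concrete illustration (a minimal simulation, not drawn from the cited works; the data-generating process and the true effect of 2 are assumptions of the sketch), the difference-in-means estimator can be computed on synthetic randomized-trial data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                       # baseline covariate, predictive of the outcome
a = rng.integers(0, 2, size=n)               # randomized binary treatment assignment
y = 2.0 * a + 3.0 * x + rng.normal(size=n)   # outcome; true ATE is 2 by construction

# Unadjusted difference-in-means: mean outcome among treated minus mean among controls
tau_dm = y[a == 1].mean() - y[a == 0].mean()
```

Because the covariate `x` explains most of the outcome variance here, `tau_dm` is unbiased but noisy; the adjusted estimators below recover the same estimand with smaller variance.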
Covariate adjustment is implemented by modeling $E[Y \mid A, X]$ as a function of $X$ via regression (linear, GLM, or machine learning), or by standardization/g-computation:
$$\hat{\tau}_{\mathrm{adj}} = \frac{1}{n}\sum_{i=1}^{n}\bigl[\hat{\mu}_1(X_i) - \hat{\mu}_0(X_i)\bigr],$$
where $\hat{\mu}_a(x)$ estimates $E[Y \mid A = a, X = x]$ (Ye et al., 2023; Bartlett, 2017). These models can be extended to handle binary or ordinal outcomes, instrumental-variable settings
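The standardization/g-computation recipe above can be sketched as follows (a minimal illustration with arm-specific linear outcome models; the simulated data-generating process and the helper `fit_mu` are assumptions of the sketch, not part of the cited methods):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
a = rng.integers(0, 2, size=n)
y = 2.0 * a + 3.0 * x + rng.normal(size=n)   # true ATE is 2 by construction

def fit_mu(xa, ya):
    """Fit a linear outcome model mu_a(x) = b0 + b1*x within one treatment arm."""
    X = np.column_stack([np.ones_like(xa), xa])
    beta, *_ = np.linalg.lstsq(X, ya, rcond=None)
    return lambda xnew: beta[0] + beta[1] * xnew

mu1 = fit_mu(x[a == 1], y[a == 1])   # estimates E[Y | A=1, X=x]
mu0 = fit_mu(x[a == 0], y[a == 0])   # estimates E[Y | A=0, X=x]

# Standardization / g-computation: average the predicted treated-vs-control
# contrast over the covariate distribution of the full sample
tau_adj = np.mean(mu1(x) - mu0(x))
```

Averaging the predicted contrast over all $n$ units, rather than within arms, is what removes chance covariate imbalance between the groups and yields the efficiency gain over the difference-in-means.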