On the Asymptotics of the Minimax Linear Estimator (2510.16661v1)
Abstract: Many causal estimands, such as average treatment effects under unconfoundedness, can be written as continuous linear functionals of an unknown regression function. We study a weighting estimator that sets weights by a minimax procedure: solving a convex optimization problem that trades off worst-case conditional bias against variance. Despite its growing use, general root-$n$ theory for this method has been limited. This paper fills that gap. Under regularity conditions, we show that the minimax linear estimator is root-$n$ consistent and asymptotically normal, and we derive its asymptotic variance. These results justify ignoring worst-case bias when forming large-sample confidence intervals and make inference less sensitive to the scaling of the function class. With a mild variance condition, the estimator attains the semiparametric efficiency bound, so an augmentation step commonly used in the literature is not needed to achieve first-order optimality. Evidence from simulations and three empirical applications, including job-training and minimum-wage policies, points to a simple rule: in designs satisfying our regularity conditions, standard-error confidence intervals suffice; otherwise, bias-aware intervals remain important.
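The abstract describes choosing weights by minimizing worst-case conditional bias plus variance. As a hedged illustration (not the paper's exact formulation), the sketch below works out the special case where the regression function lies in an ℓ2-ball of a finite basis expansion, so the worst-case bias has a closed form and the minimax problem reduces to a quadratic program with an explicit solution; the basis, radius `C`, and noise level `sigma` are all assumptions for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: estimate theta = E[f(X)] over a target population from
# noisy observations f(x_i) + eps_i, where f(x) = phi(x) @ beta with
# ||beta||_2 <= C. This is a simplified stand-in for the general convex
# function classes treated in the paper.

def basis(x):
    # simple polynomial basis [1, x, x^2] (an illustrative choice)
    return np.stack([np.ones_like(x), x, x * x], axis=-1)

n = 200
x = rng.uniform(-1, 1, n)           # observed covariates
x_target = rng.uniform(-1, 1, 500)  # target population draw

Phi = basis(x)                      # (n, p) basis at observations
t = basis(x_target).mean(axis=0)    # target moments to balance

C = 2.0       # radius of the coefficient ball (scales worst-case bias)
sigma = 1.0   # noise standard deviation

# For a linear estimator sum_i w_i y_i, the conditional bias is
#   sum_j beta_j * (t_j - (Phi^T w)_j),
# so its worst case over ||beta||_2 <= C is  C * ||Phi^T w - t||_2.
# The minimax objective
#   C^2 ||Phi^T w - t||_2^2 + sigma^2 ||w||_2^2
# is a strictly convex quadratic; setting its gradient to zero gives
#   (sigma^2 I + C^2 Phi Phi^T) w = C^2 Phi t.
A = sigma**2 * np.eye(n) + C**2 * (Phi @ Phi.T)
w = np.linalg.solve(A, C**2 * (Phi @ t))

worst_bias = C * np.linalg.norm(Phi.T @ w - t)
variance = sigma**2 * np.sum(w**2)
print(f"worst-case bias: {worst_bias:.4f}, variance: {variance:.4f}")
```

With many observations the optimized imbalance term shrinks, which is the regime in which the paper's result says the worst-case bias can be ignored when forming large-sample confidence intervals.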