Convergence rate for Nearest Neighbour matching: geometry of the domain and higher-order regularity

Published 30 Apr 2025 in math.ST and stat.TH | (2504.21633v1)

Abstract: Estimating mathematical expectations from partially observed data, and in particular from missing outcomes, is a central problem in numerous fields such as transfer learning, counterfactual analysis, and causal inference. Matching estimators, that is, estimators based on k-nearest neighbours, are widely used in this context. It is known that the variance of such estimators can converge to zero at a parametric rate, but their bias can converge more slowly when the dimension of the covariates is larger than 2, which makes the analysis of this bias particularly important. In this paper, we provide higher-order properties of the bias. In contrast to the existing literature on this problem, we do not assume that the support of the target distribution of the covariates is strictly included in that of the source, and we analyse two geometric conditions on the support that avoid such boundary bias problems. We show that these conditions are much more general than the usual convex-support assumption, leading to an improvement of existing results. Furthermore, we show that the matching estimator studied by Abadie and Imbens (2006) for the average treatment effect can be asymptotically efficient when the dimension of the covariates is less than 4, a result previously known only in dimension 1.
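To make the object of study concrete, the following is a minimal pure-Python sketch of a 1-nearest-neighbour matching estimator of the average treatment effect, in the spirit of the Abadie–Imbens estimator mentioned in the abstract: each unit's missing counterfactual outcome is imputed by the outcome of its nearest neighbour (in covariate space) with the opposite treatment status. Function and variable names here are illustrative, not taken from the paper.

```python
def nn_match_ate(X, T, Y):
    """1-NN matching estimator of the average treatment effect.

    X: list of covariate tuples, T: list of 0/1 treatment indicators,
    Y: list of observed outcomes. Illustrative sketch, not the paper's code.
    """
    def dist2(a, b):
        # Squared Euclidean distance between two covariate vectors.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    n = len(X)
    effects = []
    for i in range(n):
        # Match unit i to its nearest neighbour with the opposite treatment.
        opposite = [j for j in range(n) if T[j] != T[i]]
        j = min(opposite, key=lambda j: dist2(X[i], X[j]))
        # Impute the missing potential outcome with the match's outcome.
        y1 = Y[i] if T[i] == 1 else Y[j]
        y0 = Y[j] if T[i] == 1 else Y[i]
        effects.append(y1 - y0)
    return sum(effects) / n


# Toy example: two treated/control pairs with nearly identical covariates
# and a constant treatment effect of 1.
X = [(0.0,), (0.1,), (1.0,), (1.1,)]
T = [1, 0, 1, 0]
Y = [2.0, 1.0, 3.0, 2.0]
print(nn_match_ate(X, T, Y))  # → 1.0
```

The bias analysed in the paper arises precisely from the discrepancy between a unit's covariates and those of its matched neighbour, which is why the geometry of the covariate support near its boundary matters.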
