Regularized calibrated estimation of propensity scores with model misspecification and high-dimensional data (1710.08074v1)

Published 23 Oct 2017 in stat.ME

Abstract: Propensity score methods are widely used for estimating treatment effects from observational studies. A popular approach is to estimate propensity scores by maximum likelihood based on logistic regression, and then apply inverse probability weighted estimators or extensions to estimate treatment effects. A challenging issue, however, is that such inverse probability weighting methods, including doubly robust methods, can perform poorly even when the logistic model appears adequate as examined by conventional techniques. In addition, it becomes increasingly difficult to estimate propensity scores appropriately when dealing with a large number of covariates. To address these issues, we study calibrated estimation as an alternative to maximum likelihood estimation for fitting logistic propensity score models. We show that, under possible model misspecification, minimizing the expected calibration loss underlying the calibrated estimators involves reducing both the expected likelihood loss and a measure of relative errors that controls the mean squared errors of inverse probability weighted estimators. Furthermore, we propose a regularized calibrated estimator obtained by minimizing the calibration loss with a Lasso penalty. We develop a novel Fisher scoring descent algorithm for computing the proposed estimator, and provide a high-dimensional analysis of the resulting inverse probability weighted estimators of population means, leveraging the control of relative errors achieved by calibrated estimation. We present a simulation study and an empirical application to demonstrate the advantages of the proposed methods compared with maximum likelihood estimation and its regularized counterpart.
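The abstract outlines a concrete pipeline: fit a logistic propensity score model by minimizing a calibration loss with a Lasso penalty, then plug the fitted scores into inverse probability weighted (IPW) estimators of population means. The sketch below illustrates one plausible reading of that pipeline in NumPy. The loss form in `cal_loss`, the proximal-gradient solver `rcal_fit`, and the simulated data are illustrative assumptions; the paper's own Fisher scoring descent algorithm is not reproduced here.

```python
# Illustrative sketch (not the paper's algorithm): Lasso-penalized calibrated
# estimation of a logistic propensity score model, followed by an IPW estimate
# of the treated-outcome mean E[Y(1)].
import numpy as np

def cal_loss(gamma, F, T):
    """Assumed calibration loss: mean( T*exp(-F@gamma) + (1-T)*(F@gamma) )."""
    eta = F @ gamma
    return np.mean(T * np.exp(-eta) + (1 - T) * eta)

def cal_grad(gamma, F, T):
    """Gradient of the calibration loss with respect to gamma."""
    eta = F @ gamma
    w = -T * np.exp(-eta) + (1 - T)            # per-observation weight
    return F.T @ w / len(T)

def soft_threshold(v, t):
    """Proximal operator of the L1 penalty (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rcal_fit(X, T, lam=0.05, lr=0.1, n_iter=2000):
    """Proximal gradient descent on the Lasso-penalized calibration loss.
    The intercept (first column of F) is left unpenalized."""
    F = np.column_stack([np.ones(len(T)), X])  # add intercept column
    gamma = np.zeros(F.shape[1])
    for _ in range(n_iter):
        gamma = gamma - lr * cal_grad(gamma, F, T)
        gamma[1:] = soft_threshold(gamma[1:], lr * lam)  # penalize slopes only
    return gamma

def ipw_mean_treated(X, T, Y, gamma):
    """IPW estimate of E[Y(1)] using the fitted logistic propensity scores."""
    F = np.column_stack([np.ones(len(T)), X])
    pi = 1.0 / (1.0 + np.exp(-F @ gamma))      # fitted propensity scores
    return np.mean(T * Y / pi)

if __name__ == "__main__":
    # Simulated example with made-up parameters.
    rng = np.random.default_rng(0)
    n, p = 500, 20
    X = rng.normal(size=(n, p))
    ps_true = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))
    T = rng.binomial(1, ps_true)
    Y = X[:, 0] + T + rng.normal(size=n)       # true E[Y(1)] equals 1
    gamma_hat = rcal_fit(X, T, lam=0.02)
    print("IPW estimate of E[Y(1)]:", ipw_mean_treated(X, T, Y, gamma_hat))
```

Setting the gradient of this loss to zero yields the calibration property: the inverse-probability-weighted covariate averages over the treated group match the unweighted averages over the full sample, which is the balance condition that motivates calibrated estimation in the abstract.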
