New Confidence Intervals and Bias Comparisons Show that Maximum Likelihood Can Beat Multiple Imputation in Small Samples

Published 22 Jul 2013 in stat.ME (arXiv:1307.5875v7)

Abstract: When analyzing incomplete data, is it better to use multiple imputation (MI) or full information maximum likelihood (ML)? In large samples ML is clearly better, but in small samples ML's usefulness has been limited because ML commonly uses normal test statistics and confidence intervals that require large samples. We propose small-sample t-based ML confidence intervals that have good coverage and are shorter than t-based confidence intervals under MI. We also show that ML point estimates are less biased and more efficient than MI point estimates in small samples of bivariate normal data. With our new confidence intervals, ML should be preferred over MI, even in small samples, whenever both options are available.
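The abstract's core proposal is to replace the usual normal-based confidence interval for an ML estimate with a t-based one, which is wider in small samples and therefore achieves better coverage. A minimal sketch of the contrast, using a complete univariate sample for simplicity: the paper derives specific degrees of freedom for its intervals, which are not reproduced here, so the conventional df = n - 1 is used purely as an illustrative placeholder.

```python
import math

# Illustrative small sample (n = 10). In the paper's setting this would be
# bivariate normal data with missing values analyzed by full information ML;
# here a complete sample keeps the sketch self-contained.
x = [2.1, 1.8, 2.5, 1.9, 2.3, 2.0, 2.7, 1.6, 2.2, 2.4]
n = len(x)
mean = sum(x) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))
se = sd / math.sqrt(n)

z_975 = 1.960        # normal (large-sample) critical value
t_975_df9 = 2.262    # t critical value, df = n - 1 = 9 (standard table;
                     # the paper's proposed df formula would differ)

normal_ci = (mean - z_975 * se, mean + z_975 * se)
t_ci = (mean - t_975_df9 * se, mean + t_975_df9 * se)

print("normal-based CI:", normal_ci)
print("t-based CI:     ", t_ci)
```

The t-based interval is wider by the factor 2.262 / 1.960 ≈ 1.15, which is the price paid for honest small-sample coverage; the paper's claim is that even these widened ML intervals remain shorter than the corresponding t-based intervals under multiple imputation.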

Authors (1)

Collections

Sign up for free to add this paper to one or more collections.