
A (tight) upper bound for the length of confidence intervals with conditional coverage (2007.12448v3)

Published 24 Jul 2020 in stat.ME

Abstract: We show that two popular selective inference procedures, namely data carving (Fithian et al., 2017) and selection with a randomized response (Tian et al., 2018b), when combined with the polyhedral method (Lee et al., 2016), result in confidence intervals whose length is bounded. This contrasts with results for confidence intervals based on the polyhedral method alone, whose expected length is typically infinite (Kivaranovic and Leeb, 2020). Moreover, we show that these two procedures always dominate the corresponding sample-splitting methods in terms of interval length.
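
To make the contrast concrete, the sketch below (not code from the paper; function names and the one-sided setup are illustrative assumptions) implements the polyhedral method of Lee et al. (2016) in its simplest form: y ~ N(mu, sigma^2) observed only when y >= a, with the equal-tailed interval obtained by inverting the truncated-normal CDF in mu. When y lands just above the threshold a, the interval becomes extremely long, which is the behavior behind the infinite expected length shown by Kivaranovic and Leeb (2020); data carving and randomized response avoid this by retaining extra information beyond the selection event.

```python
# Minimal sketch (hypothetical, not the authors' code): polyhedral-method interval
# for the one-sided truncation y >= a, i.e., inference on mu from a truncated Gaussian.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def trunc_cdf(y, mu, sigma, a):
    """CDF at y of N(mu, sigma^2) truncated to [a, inf), computed via log-survival
    functions so the tails do not underflow."""
    return 1.0 - np.exp(norm.logsf((y - mu) / sigma) - norm.logsf((a - mu) / sigma))

def _invert(target, y, sigma, a):
    """Find mu with trunc_cdf(y; mu) = target; the CDF is decreasing in mu."""
    f = lambda mu: trunc_cdf(y, mu, sigma, a) - target
    lo, hi = y - sigma, y + sigma
    while f(lo) < 0:          # expand downward until the CDF exceeds the target
        lo -= 10 * sigma
    while f(hi) > 0:          # expand upward until the CDF drops below the target
        hi += 10 * sigma
    return brentq(f, lo, hi)

def polyhedral_ci(y, sigma, a, alpha=0.05):
    """Equal-tailed (1 - alpha) selective interval for mu, given selection y >= a."""
    return _invert(1 - alpha / 2, y, sigma, a), _invert(alpha / 2, y, sigma, a)

# Observation barely past the selection threshold: the interval is extremely long.
print(polyhedral_ci(y=1.05, sigma=1.0, a=1.0))
# Observation well past the threshold: close to the usual y +/- 1.96*sigma interval.
print(polyhedral_ci(y=3.00, sigma=1.0, a=1.0))
```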

References (31)
  1. Valid confidence intervals for post-model-selection predictors. Annals of Statistics, 47:1475–1504.
  2. Uniformly valid confidence intervals post-model-selection. Annals of Statistics, 48:440–463.
  3. Valid post-selection inference. Annals of Statistics, 41:802–837.
  4. Optimal inference after model selection. arXiv preprint arXiv:1410.2597.
  5. A statistical view of some chemometrics regression tools. Technometrics, 35:109–135.
  6. Post-selection estimation and testing following aggregate association tests. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 81:547–573.
  7. Group sequential methods with applications to clinical trials. Chapman & Hall/CRC, Boca Raton, FL.
  8. On the length of post-model-selection confidence intervals conditional on polyhedral constraints. Journal of the American Statistical Association, 116:845–857.
  9. A model free perspective for linear regression: Uniform-in-model bounds for post selection inference. arXiv preprint arXiv:1802.05801.
  10. Valid post-selection inference in assumption-lean linear regression. arXiv preprint arXiv:1806.04119.
  11. Exact post-selection inference, with application to the lasso. Annals of Statistics, 44:907–927.
  12. Model selection and inference: Facts and fiction. Econometric Theory, 21:21–59.
  13. Can one estimate the conditional distribution of post-model-selection estimators? Annals of Statistics, 34:2554–2591.
  14. Can one estimate the unconditional distribution of post-model-selection estimators? Econometric Theory, 24:338–376.
  15. Testing statistical hypotheses. Springer Science & Business Media.
  16. Unifying approach to selective inference with applications to cross-validation. arXiv preprint arXiv:1703.06559.
  17. Owen, D. B. (1980). A table of normal integrals. Communications in Statistics - Simulation and Computation, 9:389–419.
  18. Approximate selective inference via maximum-likelihood. arXiv preprint arXiv:1902.07884.
  19. Selection-adjusted inference: an application to confidence intervals for cis-eQTL effect sizes. arXiv preprint arXiv:1801.08686.
  20. Post-selection point and interval estimation of signal sizes in gaussian samples. Canadian Journal of Statistics, 45:128–148.
  21. A general framework for estimation and inference from clusters of features. Journal of the American Statistical Association, 113:280–293.
  22. Rosenthal, R. (1979). The “File Drawer Problem” and tolerance for null results. Psychol. Bull., 86:638–641.
  23. Post-selection inference for ℓ1-penalized likelihood models. Canadian Journal of Statistics, 46:41–61.
  24. Selective inference with unknown variance via the square-root lasso. Biometrika, 105:755–768.
  25. Selective sampling after solving a convex problem. arXiv preprint arXiv:1609.05609.
  26. Asymptotics of selective inference. Scandinavian Journal of Statistics, 44:480–499.
  27. Selective inference with a randomized response. Annals of Statistics, 46:679–710.
  28. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58:267–288.
  29. Tibshirani, R. J. (2013). The lasso problem and uniqueness. Electronic Journal of Statistics, 7:1456–1490.
  30. Exact post-selection inference for sequential regression procedures. Journal of the American Statistical Association, 111:600–620.
  31. Post-selection inference via algorithmic stability. arXiv preprint arXiv:2011.09462.