
False Discovery Rate and Localizing Power (2401.03554v1)

Published 7 Jan 2024 in stat.ME and stat.AP

Abstract: False discovery rate (FDR) is commonly used to correct for multiple testing in neuroimaging studies. However, when using two-tailed tests, making directional inferences about the results can lead to vastly inflated error rates, even approaching 100% in some cases. This happens because FDR provides only weak control over the error rate, meaning that the proportion of errors is guaranteed only globally over all tests, not within subsets, such as among those in only one or the other direction. Here we consider and evaluate different strategies for FDR control with two-tailed tests, using both synthetic and real imaging data. Approaches that separate the tests by the direction of the hypothesis test, or by the direction of the resulting test statistic, more properly control the directional error rate and preserve the benefits of FDR, albeit with a doubled risk of errors under complete absence of signal. Strategies that combine tests in both directions, or that use simple two-tailed p-values, can lead to invalid directional conclusions, even if these tests remain globally valid. To enable valid thresholding for directional inference, we suggest that imaging software allow users to set asymmetric thresholds for the two sides of the statistical map. While FDR continues to be a valid and powerful procedure for multiple testing correction, care is needed when making directional inferences for two-tailed tests or, more broadly, when making any localized inference.
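For readers who want to experiment with the direction-separating strategy the abstract describes, below is a minimal sketch in Python (assuming numpy and scipy; the function names `bh_reject` and `directional_fdr` are illustrative, not taken from the paper). It applies the Benjamini-Hochberg procedure separately to one-tailed p-values for each direction, which controls the directional error rate at the cost of the doubled risk under a complete absence of signal noted in the abstract.

```python
import numpy as np
from scipy import stats

def bh_reject(p, q):
    """Standard Benjamini-Hochberg step-up procedure at level q."""
    m = p.size
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.where(passed)[0].max()   # largest rank meeting the step-up criterion
        reject[order[:k + 1]] = True
    return reject

def directional_fdr(t_stats, df, q=0.05):
    """Apply FDR separately to each direction of a two-tailed t-test.

    One-tailed p-values are computed for positive and negative effects,
    and each set is corrected on its own, so the implied thresholds on
    the two sides of the statistical map can differ.
    """
    p_pos = stats.t.sf(t_stats, df)    # evidence for positive effects
    p_neg = stats.t.cdf(t_stats, df)   # evidence for negative effects
    return bh_reject(p_pos, q), bh_reject(p_neg, q)

# Example: 10,000 voxel-wise t-statistics with 20 degrees of freedom
t_map = np.random.standard_t(df=20, size=10_000)
pos_sig, neg_sig = directional_fdr(t_map, df=20, q=0.05)
```

Because each side is corrected on its own, the resulting thresholds on the positive and negative tails of the map can be asymmetric, which is the capability the authors suggest imaging software should expose to users.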

