Statistical tests for the intersection of independent lists of genes: Sensitivity, FDR, and type I error control (1206.6636v1)

Published 28 Jun 2012 in stat.AP

Abstract: Public data repositories have enabled researchers to compare results across multiple genomic studies in order to replicate findings. A common approach is to first rank genes according to a hypothesis of interest within each study. Then, lists of the top-ranked genes within each study are compared across studies. Genes recaptured as highly ranked (usually above some threshold) in multiple studies are considered to be significant. However, this comparison strategy often remains informal, in that type I error and false discovery rate (FDR) are usually uncontrolled. In this paper, we formalize an inferential strategy for this kind of list-intersection discovery test. We show how to compute a $p$-value associated with a "recaptured" set of genes, using a closed-form Poisson approximation to the distribution of the size of the recaptured set. We investigate operating characteristics of the test as a function of the total number of studies considered, the rank threshold within each study, and the number of studies within which a gene must be recaptured to be declared significant. We investigate the trade-off between FDR control and expected sensitivity (the expected proportion of true-positive genes identified as significant). We give practical guidance on how to design a bioinformatic list-intersection study with maximal expected sensitivity and prespecified control of type I error (at the set level) and false discovery rate (at the gene level). We show how the optimal choice of parameters may depend on the particular alternative hypothesis which might hold. We illustrate our methods using prostate cancer gene-expression datasets from the curated Oncomine database, and discuss the effects of dependence between genes on the test.
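The abstract describes the core computation: under a null hypothesis of no association, the size of the recaptured gene set is approximated by a closed-form Poisson distribution, from which a set-level p-value follows. The sketch below illustrates that idea under the simplest null model, in which each study ranks genes independently and uniformly at random; the function name and example parameters are illustrative assumptions and are not taken from the paper, whose exact formulation may differ.

```python
# Minimal sketch (not the authors' code) of a list-intersection test with a
# Poisson approximation. Assumption: under the null, each gene lands in a
# given study's top list independently with probability list_size / n_genes.
from scipy.stats import binom, poisson

def intersection_pvalue(n_genes, list_size, n_studies, min_recaptures,
                        observed_recaptured):
    """Poisson-approximation p-value for the size of the recaptured gene set.

    Under the null, the number of top lists containing a given gene is
    Binomial(n_studies, list_size / n_genes); the count of genes recaptured
    in at least `min_recaptures` lists is then approximately Poisson with
    mean n_genes * P(Binomial >= min_recaptures).
    """
    p = list_size / n_genes
    # Null probability that a single gene is recaptured in >= min_recaptures studies
    q = binom.sf(min_recaptures - 1, n_studies, p)
    lam = n_genes * q  # expected size of the recaptured set under the null
    # Set-level p-value: probability of observing at least this many recaptured genes
    return poisson.sf(observed_recaptured - 1, mu=lam)

# Hypothetical example: 20,000 genes, top-250 lists from 5 studies,
# recapture threshold of 3 studies, 12 recaptured genes observed.
print(intersection_pvalue(20000, 250, 5, 3, 12))
```

The same machinery supports the design questions raised in the abstract: varying `list_size`, `n_studies`, and `min_recaptures` shows how the null expectation (and hence sensitivity for a fixed type I error) changes with the study design.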
