Analysis of Diagnostics (Part I): Prevalence, Uncertainty Quantification, and Machine Learning
Abstract: Diagnostic testing provides a unique setting for studying and developing tools in classification theory. In such contexts, the concept of prevalence, i.e. the fraction of individuals in a population with a given condition, is fundamental, both as an inherent quantity of interest and as a parameter that controls classification accuracy. This manuscript is the first in a two-part series that studies deeper connections between classification theory and prevalence, showing how the latter establishes a more complete theory of uncertainty quantification (UQ) for certain types of machine learning (ML). We motivate this analysis via a lemma demonstrating that general classifiers minimizing a prevalence-weighted error contain the same probabilistic information as Bayes-optimal classifiers, which depend on conditional probability densities. This leads us to study relative-probability level sets $B^\star(q)$, which are reinterpreted both as classification boundaries and as useful tools for quantifying uncertainty in class labels. To realize this in practice, we also propose a numerical homotopy algorithm that estimates $B^\star(q)$ by minimizing a prevalence-weighted empirical error. The successes and shortcomings of this method motivate us to revisit properties of the level sets, and we deduce that the corresponding classifiers obey a useful monotonicity property that stabilizes the numerics and points to important extensions to UQ of ML. Throughout, we validate our methods in the context of synthetic data and a research-use-only SARS-CoV-2 enzyme-linked immunosorbent assay (ELISA).
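To make the prevalence-weighted error concrete, the following is a minimal sketch, not the authors' implementation: it assumes a one-dimensional binary problem with synthetic Gaussian class-conditional data, and the function name `prevalence_weighted_error`, the distributions, and the threshold family are all illustrative assumptions. Sweeping the prevalence $q$ and recording each empirical error minimizer mimics the homotopy idea, tracing a family of boundaries analogous to the level sets $B^\star(q)$.

```python
import numpy as np

def prevalence_weighted_error(classifier, x_neg, x_pos, q):
    """Empirical prevalence-weighted error of a binary classifier.

    The prevalence q weights the false-negative rate and (1 - q) weights
    the false-positive rate, mirroring the loss whose minimizers the
    abstract relates to Bayes-optimal classifiers.
    """
    fnr = np.mean(classifier(x_pos) == 0)  # misclassified positives
    fpr = np.mean(classifier(x_neg) == 1)  # misclassified negatives
    return q * fnr + (1.0 - q) * fpr

# Illustrative homotopy over prevalence: for 1-D Gaussian classes the
# minimizer over threshold classifiers is a single cut point, and
# sweeping q traces the family of boundaries (the analogue of B*(q)).
rng = np.random.default_rng(0)
x_neg = rng.normal(0.0, 1.0, 2000)  # synthetic negative samples
x_pos = rng.normal(2.0, 1.0, 2000)  # synthetic positive samples

thresholds = np.linspace(-2.0, 4.0, 601)
for q in (0.1, 0.3, 0.5, 0.7, 0.9):
    errs = [
        prevalence_weighted_error(
            lambda x, t=t: (x > t).astype(int), x_neg, x_pos, q
        )
        for t in thresholds
    ]
    t_star = thresholds[int(np.argmin(errs))]
    print(f"q = {q:.1f}: empirical boundary estimate ~ {t_star:+.2f}")
```

Under these assumptions the printed boundaries shift monotonically with $q$, consistent with the monotonicity property the abstract says stabilizes the numerics; for $q = 0.5$ the estimate sits near the midpoint of the two class means.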