How to Evaluate the Accuracy of Online and AI-Based Symptom Checkers: A Standardized Methodological Framework (2506.22379v1)
Abstract: Online and AI-based symptom checkers are applications that help medical laypeople diagnose their symptoms and decide which course of action to take. Previous evaluations of these tools have relied primarily on an approach introduced a decade ago that lacked any form of quality control. Numerous studies have criticized this approach, and several empirical studies have sought to improve specific aspects of such evaluations. However, even after a decade, a high-quality methodological framework for standardizing the evaluation of symptom checkers is still missing. This article synthesizes empirical studies to outline a framework for standardized evaluations built on representative case selection, an externally and internally valid evaluation design, and metrics that increase cross-study comparability. The framework is supported by several open-access resources that facilitate implementation. Ultimately, it should enhance the quality and comparability of future evaluations of online and AI-based symptom checkers, enabling meta-analyses and helping stakeholders make more informed decisions.
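The abstract points to metrics that increase cross-study comparability without defining them here. As a purely illustrative sketch (not the paper's own metric set), the snippet below computes top-k diagnostic accuracy over clinical vignettes, a metric commonly reported in earlier symptom-checker evaluations; the `VignetteResult` structure and function name are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class VignetteResult:
    """One clinical vignette: a gold-standard diagnosis and the
    symptom checker's ranked differential (hypothetical structure)."""
    gold_diagnosis: str
    ranked_suggestions: list[str]

def top_k_accuracy(results: list[VignetteResult], k: int) -> float:
    """Fraction of vignettes in which the gold-standard diagnosis
    appears among the checker's top-k suggestions (e.g., top-1 or
    top-3, as reported in prior symptom-checker evaluations)."""
    if not results:
        raise ValueError("no vignette results to score")
    hits = sum(
        r.gold_diagnosis in r.ranked_suggestions[:k] for r in results
    )
    return hits / len(results)

# Example: two vignettes, scored at k=1 and k=3.
results = [
    VignetteResult("migraine", ["tension headache", "migraine", "sinusitis"]),
    VignetteResult("appendicitis", ["appendicitis", "gastroenteritis"]),
]
print(top_k_accuracy(results, k=1))  # 0.5 (only the second vignette hits at k=1)
print(top_k_accuracy(results, k=3))  # 1.0 (both hit within the top 3)
```

Reporting accuracy at several fixed cutoffs, rather than a single bespoke score, is one way evaluations can stay comparable across studies that test different checkers on different case sets.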