
How to Evaluate the Accuracy of Online and AI-Based Symptom Checkers: A Standardized Methodological Framework (2506.22379v1)

Published 27 Jun 2025 in cs.HC

Abstract: Online and AI-based symptom checkers are applications that assist medical laypeople in diagnosing their symptoms and determining which course of action to take. When evaluating these tools, previous studies primarily used an approach introduced a decade ago that lacked any type of quality control. Numerous studies have criticized this approach, and several empirical studies have sought to improve specific aspects of evaluations. However, even after a decade, a high-quality methodological framework for standardizing the evaluation of symptom checkers remains missing. This article synthesizes empirical studies to outline a framework for standardized evaluations based on representative case selection, an externally and internally valid evaluation design, and metrics that increase cross-study comparability. This approach is backed up by several open-access resources to facilitate implementation. Ultimately, this approach should enhance the quality and comparability of future evaluations of online and AI-based symptom checkers to enable meta-analyses and help stakeholders make more informed decisions.
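As an illustration of the kind of cross-study comparability metrics the abstract alludes to, the sketch below computes two measures commonly reported in symptom checker evaluations: top-k diagnostic accuracy and triage agreement over a set of evaluation vignettes. The data structures and field names are hypothetical and are not taken from the paper; the framework's actual metric definitions may differ.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical vignette/result records; field names are illustrative only.
@dataclass
class CaseResult:
    gold_diagnosis: str              # reference diagnosis for the vignette
    gold_triage: str                 # reference urgency level, e.g. "emergency"
    ranked_diagnoses: List[str] = field(default_factory=list)  # checker output, best first
    suggested_triage: str = ""       # checker's urgency recommendation

def top_k_accuracy(results: List[CaseResult], k: int) -> float:
    """Share of cases whose reference diagnosis appears in the checker's top-k list."""
    hits = sum(r.gold_diagnosis in r.ranked_diagnoses[:k] for r in results)
    return hits / len(results)

def triage_agreement(results: List[CaseResult]) -> float:
    """Share of cases where the suggested urgency level matches the reference."""
    matches = sum(r.suggested_triage == r.gold_triage for r in results)
    return matches / len(results)

# Example usage with two toy vignettes.
cases = [
    CaseResult("migraine", "self-care", ["tension headache", "migraine"], "self-care"),
    CaseResult("appendicitis", "emergency", ["appendicitis", "gastroenteritis"], "emergency"),
]
print(f"Top-1 accuracy: {top_k_accuracy(cases, 1):.2f}")   # 0.50
print(f"Top-3 accuracy: {top_k_accuracy(cases, 3):.2f}")   # 1.00
print(f"Triage agreement: {triage_agreement(cases):.2f}")  # 1.00
```

Reporting such case-level metrics alongside the underlying vignette set is one way evaluations can become comparable enough to support the meta-analyses the abstract calls for.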
