Generalist LLMs, such as GPT-4, have shown considerable promise in various domains, including medical diagnosis. Rare diseases affect approximately 300 million people worldwide, yet clinical diagnosis rates remain unsatisfactory, primarily because experienced physicians are scarce and differentiating among the many rare diseases is complex. In this context, recent news such as "ChatGPT correctly diagnosed a 4-year-old's rare disease after 17 doctors failed" underscores LLMs' underexplored potential role in the clinical diagnosis of rare diseases. To bridge this research gap, we introduce RareBench, a pioneering benchmark designed to systematically evaluate the capabilities of LLMs on four critical dimensions within the realm of rare diseases. We have also compiled the largest open-source dataset of rare disease patients, establishing a benchmark for future studies in this domain. To facilitate differential diagnosis of rare diseases, we develop a dynamic few-shot prompting methodology that leverages a comprehensive rare disease knowledge graph synthesized from multiple knowledge bases, significantly enhancing LLMs' diagnostic performance. Moreover, we present an exhaustive comparative study of GPT-4's diagnostic capabilities against those of specialist physicians. Our experimental findings underscore the promising potential of integrating LLMs into the clinical diagnostic process for rare diseases, paving the way for future advancements in this field.
The paper introduces RareBench, a benchmark to evaluate LLMs like GPT-4 in diagnosing rare diseases.
It utilizes a dynamic few-shot prompt methodology and a comprehensive rare disease knowledge graph to enhance the diagnostic capabilities of LLMs.
Experimental results show that GPT-4 performs on par with senior specialists in rare disease diagnosis, especially in differential diagnosis among universal rare diseases.
The study suggests LLMs, specifically GPT-4, could significantly support rare disease diagnosis, potentially extending specialist knowledge to more generalized medical settings.
LLMs like GPT-4 have exhibited promising capabilities in various domains, including healthcare. Given their extensive knowledge base, these models have the potential to assist in diagnosing rare diseases, a significant challenge owing to the diseases' low prevalence and the shortage of specialized knowledge among general practitioners. This paper introduces "RareBench," a pioneering benchmark designed to systematically evaluate the capabilities of LLMs in diagnosing rare diseases.
RareBench is constructed on the foundation of the largest open-source dataset on rare disease patients. It assesses LLMs across four critical dimensions: phenotype extraction from electronic health records (EHRs), screening for specific rare diseases, comparative analysis of common and rare diseases, and differential diagnosis among universal rare diseases. Additionally, the study employs a dynamic few-shot prompt methodology, leveraging a comprehensive rare disease knowledge graph synthesized from multiple knowledge bases. This approach significantly boosts the diagnostic capabilities of LLMs by enhancing their understanding of the complex relationship between phenotypes and rare diseases.
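The paper does not reproduce the selection algorithm in detail, but the core idea of dynamic few-shot prompting can be sketched as follows: score candidate patient cases by information-content-weighted phenotype overlap with the query patient, then place the top-k most similar cases into the prompt as exemplars. This is a minimal illustrative sketch, not the authors' implementation; the `HP:`-style phenotype codes, the corpus structure, and the IC-weighted Jaccard similarity are all assumptions chosen for clarity.

```python
import math
from collections import Counter

def information_content(corpus):
    """Information content of each phenotype, -log of its frequency
    across the patient corpus: rarer phenotypes carry more diagnostic
    weight, mirroring how a knowledge graph would prioritize them."""
    counts = Counter(p for case in corpus for p in case["phenotypes"])
    total = len(corpus)
    return {p: -math.log(c / total) for p, c in counts.items()}

def similarity(query_phenos, case_phenos, ic):
    """IC-weighted overlap (weighted Jaccard) between the query
    patient's phenotypes and a candidate few-shot case."""
    q, c = set(query_phenos), set(case_phenos)
    denom = sum(ic.get(p, 0.0) for p in q | c)
    return sum(ic.get(p, 0.0) for p in q & c) / denom if denom else 0.0

def select_few_shot(query_phenos, corpus, k=3):
    """Pick the k most phenotypically similar cases to serve as
    dynamic few-shot exemplars in the diagnostic prompt."""
    ic = information_content(corpus)
    ranked = sorted(
        corpus,
        key=lambda case: similarity(query_phenos, case["phenotypes"], ic),
        reverse=True,
    )
    return ranked[:k]
```

In this sketch, the selected cases (each a phenotype list plus its confirmed diagnosis) would be serialized into the prompt ahead of the query patient's phenotypes, so the exemplars shown to the LLM change with every query rather than being fixed.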
The study's experimental findings underscore the promising potential of integrating LLMs into the clinical diagnostic process for rare diseases. GPT-4, in particular, demonstrated capabilities on par with senior doctors across various specialties in the differential diagnosis of rare diseases. This was achieved through the development and application of a rare disease knowledge integration dynamic few-shot prompting strategy, which marks a significant stride in utilizing LLMs for complex clinical scenarios.
This comprehensive evaluation reveals significant insights and implications for broader AI applications in healthcare. The introduction of RareBench offers a structured framework to rigorously assess and refine the diagnostic acumen of LLMs in rare diseases. The results notably highlight GPT-4's competency, rivaling that of experienced specialists, particularly when augmented by dynamic few-shot prompts grounded in an integrated rare disease knowledge graph. This suggests a potential paradigm shift in rare disease diagnosis, where LLMs could augment or even extend the reach of specialist knowledge to generalist settings.
The integration of LLMs like GPT-4 in aiding the diagnosis of rare diseases opens new avenues for future research and application.
RareBench marks a significant advancement in evaluating LLMs' utility in diagnosing rare diseases. The findings from this study pave the way for future innovations in AI-assisted diagnostics, promising to bridge the gap in medical expertise and improve outcomes for patients with rare diseases.