Large Language Models Encode Clinical Knowledge (2212.13138v1)

Published 26 Dec 2022 in cs.CL

Abstract: LLMs have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.

Overview of LLMs Encode Clinical Knowledge

The paper "Large Language Models Encode Clinical Knowledge" addresses how to assess and apply LLMs in clinical settings. Its central contribution is the MultiMedQA benchmark, a collection of seven diverse datasets designed to evaluate the clinical knowledge of LLMs: MedQA (USMLE), MedMCQA (AIIMS/NEET), PubMedQA, MMLU clinical topics, LiveQA, MedicationQA, and the newly introduced HealthSearchQA.

MultiMedQA Benchmark

MultiMedQA is a benchmark that combines six existing medical question-answering datasets and one new dataset, HealthSearchQA. The paper emphasizes the benchmark's utility in evaluating model predictions across a range of medical tasks, including professional medical exams, medical research queries, and consumer health questions. HealthSearchQA, curated from commonly searched health queries, adds a consumer-focused dimension to this comprehensive benchmark.
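
The snippet below is a minimal illustrative sketch (not the authors' code) of how MultiMedQA's seven components could be represented in a unified evaluation harness. The dataset names and answer formats come from the paper; the `QADataset` structure itself is a hypothetical simplification.

```python
# Illustrative sketch of MultiMedQA's components for a unified evaluation harness.
from dataclasses import dataclass


@dataclass
class QADataset:
    name: str
    answer_format: str   # "multiple_choice" or "long_form"
    source: str          # provenance of the questions


MULTIMEDQA = [
    QADataset("MedQA (USMLE)", "multiple_choice", "US medical licensing exam questions"),
    QADataset("MedMCQA", "multiple_choice", "AIIMS/NEET medical entrance exams"),
    QADataset("PubMedQA", "multiple_choice", "biomedical research abstracts"),
    QADataset("MMLU clinical topics", "multiple_choice", "professional knowledge exams"),
    QADataset("LiveQA", "long_form", "consumer health questions"),
    QADataset("MedicationQA", "long_form", "consumer medication questions"),
    QADataset("HealthSearchQA", "long_form", "commonly searched health queries"),
]

if __name__ == "__main__":
    for ds in MULTIMEDQA:
        print(f"{ds.name:22s} {ds.answer_format:16s} {ds.source}")
```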

Evaluation of PaLM and Flan-PaLM

The authors evaluated PaLM, a 540-billion parameter LLM, and its instruction-tuned variant, Flan-PaLM, using MultiMedQA. The results showed that Flan-PaLM achieved state-of-the-art (SOTA) accuracy across multiple-choice datasets within the benchmark, significantly surpassing previous models. For instance, Flan-PaLM attained a 67.6% accuracy on the MedQA dataset, outperforming the prior SOTA by over 17%.
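
As a rough sketch of the kind of prompting strategies combined for the multiple-choice evaluations (few-shot prompting, chain-of-thought, and self-consistency), the snippet below builds a few-shot prompt and majority-votes over sampled reasoning paths. `sample_completion` is a hypothetical stand-in for the model's sampling API, and the prompt format is illustrative rather than the paper's exact template.

```python
# Sketch: few-shot chain-of-thought prompting with self-consistency voting.
from collections import Counter


def build_prompt(exemplars, question, options):
    """Few-shot prompt: worked exemplars followed by the test question."""
    parts = []
    for ex in exemplars:
        parts.append(f"Question: {ex['question']}\n"
                     f"Answer: {ex['rationale']} So the answer is ({ex['answer']}).")
    opts = " ".join(f"({k}) {v}" for k, v in options.items())
    parts.append(f"Question: {question}\n{opts}\nAnswer:")
    return "\n\n".join(parts)


def self_consistency_answer(prompt, sample_completion, n_samples=11):
    """Sample several chain-of-thought decodes and majority-vote the final choice."""
    votes = []
    for _ in range(n_samples):
        completion = sample_completion(prompt, temperature=0.7)  # hypothetical API
        # Extract the final choice letter, e.g. "... so the answer is (C)."
        if "(" in completion:
            votes.append(completion.rsplit("(", 1)[-1][0])
    return Counter(votes).most_common(1)[0][0] if votes else None
```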

Human Evaluation Framework

To complement automated evaluations, the authors proposed a human evaluation framework assessing multiple axes such as factuality, precision, potential harm, and bias. This framework was crucial in uncovering key gaps in the LLM's responses, particularly in Flan-PaLM's long-form answers to consumer medical questions.
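
A hypothetical sketch of how ratings along these axes might be recorded for a single model answer is given below; the axis names follow the paper's evaluation framework, but the field types and scales are illustrative assumptions.

```python
# Illustrative record of one rater's judgments for a single long-form answer.
from dataclasses import dataclass


@dataclass
class AnswerRating:
    agrees_with_scientific_consensus: bool
    evidence_of_comprehension: bool
    evidence_of_knowledge_recall: bool
    evidence_of_reasoning: bool
    incorrect_or_missing_content: str   # e.g. "none", "little", "great"
    extent_of_possible_harm: str        # e.g. "none", "moderate", "severe"
    evidence_of_bias: bool


rating = AnswerRating(
    agrees_with_scientific_consensus=True,
    evidence_of_comprehension=True,
    evidence_of_knowledge_recall=True,
    evidence_of_reasoning=True,
    incorrect_or_missing_content="little",
    extent_of_possible_harm="none",
    evidence_of_bias=False,
)
```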

Instruction Prompt Tuning

To address these gaps, the paper introduces instruction prompt tuning, a data- and parameter-efficient technique for aligning LLMs to new domains using a small number of exemplars. The resulting model, Med-PaLM, underwent extensive human evaluation and showed marked improvements over Flan-PaLM, though it still fell short of clinicians overall. Notably, Med-PaLM's answers aligned with scientific consensus 92.6% of the time, close to the rate for clinician-generated answers, and the proportion of answers judged potentially harmful dropped substantially relative to Flan-PaLM.
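
Conceptually, instruction prompt tuning learns a small set of soft prompt vectors, conditioned on clinician-curated exemplars, that are prepended to the input embeddings while the underlying LLM stays frozen. The PyTorch sketch below illustrates that idea under simplified assumptions; the dimensions, initialization, and optimizer are placeholders, not the paper's actual configuration.

```python
# Conceptual sketch of soft prompt tuning with a frozen LLM.
import torch
import torch.nn as nn


class SoftPrompt(nn.Module):
    def __init__(self, prompt_length: int, embed_dim: int):
        super().__init__()
        # The only trainable parameters: a short sequence of prompt vectors.
        self.prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # Prepend the learned prompt to every example in the batch.
        batch = token_embeddings.shape[0]
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeddings], dim=1)


# Usage: embeddings from the frozen LLM pass through the soft prompt before the
# transformer layers; only soft_prompt.parameters() receive gradient updates.
soft_prompt = SoftPrompt(prompt_length=100, embed_dim=512)
dummy_embeddings = torch.randn(2, 16, 512)   # (batch, tokens, embed_dim)
extended = soft_prompt(dummy_embeddings)     # shape: (2, 116, 512)
optimizer = torch.optim.Adam(soft_prompt.parameters(), lr=1e-3)
```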

Key Findings

  1. State-of-the-art Performance: Flan-PaLM achieved SOTA results on every MultiMedQA multiple-choice dataset, including 67.6% accuracy on MedQA (USMLE).
  2. Instruction Prompt Tuning Effectiveness: This method improved model alignment with the medical domain, as evidenced by Med-PaLM's performance.
  3. Human Evaluation Insights: The evaluation framework highlighted critical areas for improvement, emphasizing the need for both robust evaluation tools and cautious application in clinical contexts.

Implications and Future Developments

The findings suggest a promising trajectory for LLMs in clinical applications, indicating potential utility in tasks ranging from clinical decision support to patient education. However, the paper also underscores the necessity for ongoing evaluation frameworks and alignment techniques to ensure the safety and reliability of these models in practice. Future research could explore enhancements in fairness and bias mitigation, as well as multilingual support to expand applicability across diverse populations.

Conclusion

The paper represents a significant step forward in understanding and improving LLMs for clinical knowledge applications. While the current models like Med-PaLM show remarkable potential, the path to real-world clinical implementation will require continuous advancements in accuracy, safety, and ethical considerations. This paper lays a strong foundation for such future explorations, highlighting both the opportunities and the challenges that lie ahead.

Authors (30)
  1. Karan Singhal (26 papers)
  2. Shekoofeh Azizi (23 papers)
  3. Tao Tu (45 papers)
  4. Jason Wei (49 papers)
  5. Hyung Won Chung (30 papers)
  6. Nathan Scales (8 papers)
  7. Ajay Tanwani (2 papers)
  8. Heather Cole-Lewis (6 papers)
  9. Stephen Pfohl (10 papers)
  10. Perry Payne (1 paper)
  11. Martin Seneviratne (6 papers)
  12. Paul Gamble (7 papers)
  13. Chris Kelly (5 papers)
  14. Nathanael Scharli (1 paper)
  15. Aakanksha Chowdhery (19 papers)
  16. Philip Mansfield (24 papers)
  17. Blaise Aguera y Arcas (66 papers)
  18. Dale Webster (7 papers)
  19. Yossi Matias (61 papers)
  20. Katherine Chou (17 papers)
Citations (1,663)