Superhuman performance of a large language model on the reasoning tasks of a physician (2412.10849v2)

Published 14 Dec 2024 in cs.AI and cs.CL

Abstract: A seminal paper published by Ledley and Lusted in 1959 introduced complex clinical diagnostic reasoning cases as the gold standard for the evaluation of expert medical computing systems, a standard that has held ever since. Here, we report the results of a physician evaluation of a LLM on challenging clinical cases against a baseline of hundreds of physicians. We conduct five experiments to measure clinical reasoning across differential diagnosis generation, display of diagnostic reasoning, triage differential diagnosis, probabilistic reasoning, and management reasoning, all adjudicated by physician experts with validated psychometrics. We then report a real-world study comparing human expert and AI second opinions in randomly-selected patients in the emergency room of a major tertiary academic medical center in Boston, MA. We compared LLMs and board-certified physicians at three predefined diagnostic touchpoints: triage in the emergency room, initial evaluation by a physician, and admission to the hospital or intensive care unit. In all experiments--both vignettes and emergency room second opinions--the LLM displayed superhuman diagnostic and reasoning abilities, as well as continued improvement from prior generations of AI clinical decision support. Our study suggests that LLMs have achieved superhuman performance on general medical diagnostic and management reasoning, fulfilling the vision put forth by Ledley and Lusted, and motivating the urgent need for prospective trials.

Performance Evaluation of an LLM on Medical Reasoning Tasks

The paper "Superhuman performance of a LLM on the reasoning tasks of a physician" provides an in-depth analysis of the diagnostic capabilities of the OpenAI's o1-preview model compared to previous LLMs such as GPT-4, human physicians, and other diagnostic systems. The authors aim to evaluate the proficiency of the o1-preview model in performing clinical reasoning tasks that are essential in medical practice. This paper contributes to the ongoing discourse on the application of AI in healthcare by measuring performance across various research paradigms, involving both frequently encountered and intricate medical cases.

Methodology and Experiments

The authors conducted five distinct experiments to measure the competency of the o1-preview model in medical reasoning domains, including differential diagnosis generation, reasoning presentation, diagnostic test selection, probabilistic reasoning, and decision-making in medical management. These experiments utilized clinical vignettes from recognized sources such as the New England Journal of Medicine (NEJM) Clinicopathological Conferences and other landmark studies.
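
As a rough illustration of how a single vignette-based experiment can be run, the sketch below submits a clinical case to the o1-preview model through the OpenAI chat completions API and asks for a ranked differential. The prompt wording and helper function are illustrative assumptions, not the paper's actual protocol.

```python
# Minimal sketch of eliciting a differential diagnosis from o1-preview via the
# OpenAI Python client. The prompt text and function name are illustrative
# assumptions; they do not reproduce the paper's actual instructions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def generate_differential(vignette: str, model: str = "o1-preview") -> str:
    """Return the model's ranked differential diagnosis for one case vignette."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": (
                    "Read the following clinical case and provide a ranked "
                    "differential diagnosis, most likely diagnosis first, "
                    "with a brief justification for each entry.\n\n" + vignette
                ),
            }
        ],
    )
    return response.choices[0].message.content


# Example usage with a toy vignette:
# print(generate_differential("A 65-year-old man presents with fever, productive cough, ..."))
```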

Physician adjudicators assessed the AI's outputs with validated psychometric instruments such as the Revised-IDEA (R-IDEA) score and the Bond Score, against benchmarks drawn from historical human performance on the same cases. The o1-preview model's run-time chain-of-thought (CoT) processing underlies the complex, multi-step reasoning these tasks demand.
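
To make the scoring step concrete, the brief sketch below aggregates hypothetical per-case R-IDEA ratings from two adjudicators into cohort-level summary statistics. The 0-10 scale ceiling, the case identifiers, and the scores are assumptions for illustration only.

```python
# Illustrative aggregation of adjudicator R-IDEA ratings (assumed 0-10 scale).
# The case IDs and scores are made-up placeholders, not data from the paper.
from statistics import median


def summarize_ridea(case_scores: dict[str, list[int]], ceiling: int = 9) -> dict[str, float]:
    """Median rating per case across adjudicators, then cohort-level statistics."""
    per_case = {case: median(scores) for case, scores in case_scores.items()}
    n = len(per_case)
    return {
        "median_score": median(per_case.values()),
        "share_near_perfect": sum(s >= ceiling for s in per_case.values()) / n,
    }


example = {"case_001": [10, 9], "case_002": [7, 8], "case_003": [10, 10]}
print(summarize_ridea(example))  # {'median_score': 9.5, 'share_near_perfect': 0.666...}
```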

Results

The results showed that the o1-preview model substantially outperformed not only GPT-4 but also clinicians on several of the evaluated tasks. In differential diagnosis generation, the model included the correct diagnosis in its differential in 78.3% of cases, a significant leap over GPT-4's prior performance. Furthermore, o1-preview achieved near-perfect R-IDEA scores in a substantial share of cases when documenting its clinical reasoning.

In diagnostic test selection, o1-preview's suggested tests agreed with the actual management plan in 87.5% of the patient cases. This level of agreement is particularly noteworthy given the nuance involved in decisions about medical testing.
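
Both headline figures are simple proportions over adjudicated, case-level judgments; the sketch below shows that arithmetic on hypothetical adjudications (the boolean values are invented for illustration, not the paper's data).

```python
# Sketch of the proportion metrics behind the reported results: the share of cases
# whose differential contained the correct diagnosis, and the share whose suggested
# test matched the actual plan. The booleans are hypothetical adjudications.


def proportion(judgments: list[bool]) -> float:
    """Fraction of cases judged positive by the physician adjudicators."""
    return sum(judgments) / len(judgments)


correct_dx_in_differential = [True, True, False, True]  # placeholder adjudications
test_matches_actual_plan = [True, False, True, True]    # placeholder adjudications

print(f"Differential includes correct diagnosis: {proportion(correct_dx_in_differential):.1%}")
print(f"Suggested test matches actual plan: {proportion(test_matches_actual_plan):.1%}")
```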

In contrast, the model's probabilistic reasoning was on par with GPT-4's, with no observable improvement. This highlights an area where human intuition and expertise may still hold an advantage, given the abstract nature of probabilistic prediction.
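
One common way to compare such probability estimates against evidence-based reference values is an absolute-error summary; the sketch below applies that comparison to invented numbers. Both the reference probabilities and the error metric are assumptions for illustration, not the paper's data or scoring method.

```python
# Sketch of scoring probabilistic reasoning: compare model-estimated probabilities
# against evidence-based reference probabilities using mean absolute error. All
# values are invented, and the metric is an illustrative assumption.


def mean_absolute_error(estimates: list[float], references: list[float]) -> float:
    """Average absolute difference between estimated and reference probabilities."""
    return sum(abs(e - r) for e, r in zip(estimates, references)) / len(estimates)


model_estimates = [0.30, 0.10, 0.75]   # e.g. post-test probability estimates
reference_values = [0.25, 0.05, 0.80]  # e.g. literature-derived reference values

print(f"Mean absolute error: {mean_absolute_error(model_estimates, reference_values):.3f}")
```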

Discussion

The research underscores a trend of LLMs such as o1-preview rivaling human physicians in domains that demand elaborate decision-making and the synthesis of disparate knowledge sources. The authors caution that current benchmarks are approaching saturation and argue for more robust, scalable evaluation techniques that mirror real clinical environments, which would better establish LLMs' utility in medical applications.

The practical implication is a potentially transformative role for LLMs in reducing diagnostic error and making more efficient use of healthcare resources. The authors call for prospective trials that embed these models in clinical workflows and emphasize effective human-computer interaction, which may redefine conventional clinical decision-making.

Conclusions

The o1-preview model represents a significant advance in AI-driven clinical reasoning, surpassing historical controls and improving diagnostic accuracy. The paper argues that more advanced evaluation frameworks are needed before such models can be integrated into medical practice effectively. The findings suggest that improvements in patient care will depend on collaboration between AI systems and medical professionals, and that realizing these gains will require investment in clinician training and supporting technology.

Authors (25)
  1. Peter G. Brodeur (1 paper)
  2. Thomas A. Buckley (1 paper)
  3. Zahir Kanjee (1 paper)
  4. Ethan Goh (4 papers)
  5. Evelyn Bin Ling (1 paper)
  6. Priyank Jain (2 papers)
  7. Stephanie Cabral (1 paper)
  8. Raja-Elie Abdulnour (1 paper)
  9. Jason A. Freed (1 paper)
  10. Andrew Olson (1 paper)
  11. Daniel J. Morgan (1 paper)
  12. Jason Hom (6 papers)
  13. Robert Gallo (3 papers)
  14. Eric Horvitz (76 papers)
  15. Jonathan Chen (11 papers)
  16. Arjun K. Manrai (5 papers)
  17. Adam Rodman (6 papers)
  18. Adrian D. Haimovich (4 papers)
  19. Liam G. McCoy (3 papers)
  20. Haadi Mombini (1 paper)