Answering real-world clinical questions using large language model based systems (2407.00541v1)
Abstract: Evidence to guide healthcare decisions is often limited by a lack of relevant and trustworthy literature as well as difficulty in contextualizing existing research for a specific patient. LLMs could potentially address both challenges by either summarizing published literature or generating new studies based on real-world data (RWD). We evaluated the ability of five LLM-based systems to answer 50 clinical questions and had nine independent physicians review the responses for relevance, reliability, and actionability. As it stands, general-purpose LLMs (ChatGPT-4, Claude 3 Opus, Gemini Pro 1.5) rarely produced answers that were deemed relevant and evidence-based (2-10%). In contrast, retrieval-augmented generation (RAG)-based and agentic LLM systems produced relevant and evidence-based answers for 24% (OpenEvidence) to 58% (ChatRWD) of questions. Only the agentic ChatRWD could answer novel questions, doing so for 65% of them versus 0-9% for the other systems. These results suggest that while general-purpose LLMs should not be used as-is, a purpose-built RAG system for evidence summarization and a system for generating novel evidence from RWD, working synergistically, would improve the availability of pertinent evidence for patient care.
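To make the architectural distinction in the abstract concrete, below is a minimal sketch of the generic RAG pattern: retrieve supporting studies first, then ground the model's prompt in them. Everything here (the toy corpus, the keyword-overlap retriever, the `generate` stub) is a hypothetical illustration, not the implementation of OpenEvidence, ChatRWD, or any system evaluated in the paper.

```python
from typing import List

# Toy "literature index": a real system would use a vector store
# built over published studies rather than three hardcoded strings.
CORPUS = [
    "Study A: drug X reduced 30-day readmission in heart failure patients.",
    "Study B: drug X showed no mortality benefit at 2-year follow-up.",
    "Study C: diet Y improved HbA1c in type 2 diabetes patients.",
]

def retrieve(question: str, k: int = 2) -> List[str]:
    """Rank corpus entries by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would query a model here."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} chars]"

def answer(question: str) -> str:
    evidence = retrieve(question)
    # Grounding the prompt in retrieved evidence is what separates a
    # RAG system from posing the question to a general-purpose LLM as-is.
    prompt = (
        "Answer the clinical question using only the evidence below.\n"
        + "\n".join(f"- {doc}" for doc in evidence)
        + f"\nQuestion: {question}"
    )
    return generate(prompt)

print(answer("Does drug X reduce readmission in heart failure?"))
```

An agentic system such as the one the abstract describes would go further than this retrieval loop, e.g. by planning and executing a new analysis over RWD when no published evidence answers the question.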
- Yen Sia Low
- Michael L. Jackson
- Rebecca J. Hyde
- Robert E. Brown
- Neil M. Sanghavi
- Julian D. Baldwin
- C. William Pike
- Jananee Muralidharan
- Gavin Hui
- Natasha Alexander
- Hadeel Hassan
- Rahul V. Nene
- Morgan Pike
- Courtney J. Pokrzywa
- Shivam Vedak
- Adam Paul Yan
- Dong-han Yao
- Amy R. Zipursky
- Christina Dinh
- Philip Ballentine