Evaluation of GPT-3.5 and GPT-4 for supporting real-world information needs in healthcare delivery (2304.13714v3)

Published 26 Apr 2023 in cs.AI, cs.CL, and cs.IR

Abstract: Despite growing interest in using LLMs in healthcare, current explorations do not assess the real-world utility and safety of LLMs in clinical settings. Our objective was to determine whether two LLMs can serve information needs submitted by physicians as questions to an informatics consultation service in a safe and concordant manner. Sixty-six questions from an informatics consult service were submitted to GPT-3.5 and GPT-4 via simple prompts. Twelve physicians assessed the LLM responses' possibility of patient harm and concordance with existing reports from an informatics consultation service. Physician assessments were summarized based on majority vote. For no questions did a majority of physicians deem either LLM response as harmful. For GPT-3.5, responses to 8 questions were concordant with the informatics consult report, 20 discordant, and 9 were unable to be assessed. There were 29 responses with no majority on "Agree", "Disagree", and "Unable to assess". For GPT-4, responses to 13 questions were concordant, 15 discordant, and 3 were unable to be assessed. There were 35 responses with no majority. Responses from both LLMs were largely devoid of overt harm, but less than 20% of the responses agreed with an answer from an informatics consultation service, responses contained hallucinated references, and physicians were divided on what constitutes harm. These results suggest that while general purpose LLMs are able to provide safe and credible responses, they often do not meet the specific information need of a given question. A definitive evaluation of the usefulness of LLMs in healthcare settings will likely require additional research on prompt engineering, calibration, and custom-tailoring of general purpose models.
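The abstract describes summarizing the twelve physicians' per-response labels by majority vote, with responses lacking a strict majority counted separately ("no majority"). A minimal sketch of that aggregation, assuming each response receives one of three labels per physician (the function name and label strings are illustrative, not from the paper):

```python
from collections import Counter

# Hypothetical sketch: each LLM response gets one label per physician
# ("Agree", "Disagree", or "Unable to assess"). A response's summary
# label is the label held by a strict majority of physicians; with no
# strict majority, the response is tallied as "No majority".

def summarize_votes(labels):
    counts = Counter(labels)
    top_label, top_count = counts.most_common(1)[0]
    if top_count > len(labels) / 2:  # strict majority of assessors
        return top_label
    return "No majority"
```

For example, with twelve assessors, a 7–5 split yields the majority label, while a 6–6 or 4–4–4 split yields "No majority", matching the paper's separate tally of no-majority responses.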

Authors (18)
  1. Debadutta Dash
  2. Rahul Thapa
  3. Juan M. Banda
  4. Akshay Swaminathan
  5. Morgan Cheatham
  6. Mehr Kashyap
  7. Nikesh Kotecha
  8. Jonathan H. Chen
  9. Saurabh Gombar
  10. Lance Downing
  11. Rachel Pedreira
  12. Ethan Goh
  13. Angel Arnaout
  14. Garret Kenn Morris
  15. Honor Magon
  16. Eric Horvitz
  17. Nigam H. Shah
  18. Matthew P Lungren
Citations (42)