
Evaluation of General Large Language Models in Contextually Assessing Semantic Concepts Extracted from Adult Critical Care Electronic Health Record Notes (2401.13588v1)

Published 24 Jan 2024 in cs.CL, cs.AI, and cs.SE

Abstract: The field of healthcare has increasingly turned its focus toward LLMs due to their remarkable performance. However, their performance in actual clinical applications has been underexplored. Traditional evaluations based on question-answering tasks do not fully capture the nuanced contexts of clinical practice. This gap highlights the need for more in-depth and practical assessments of LLMs in real-world healthcare settings. Objective: We sought to evaluate the performance of LLMs in the complex clinical context of adult critical care medicine using systematic and comprehensible analytic methods, including clinician annotation and adjudication. Methods: We investigated the performance of three general LLMs in understanding and processing real-world clinical notes. Concepts from 150 clinical notes were identified by MetaMap and then labeled by 9 clinicians. Each LLM's proficiency was evaluated by identifying the temporality and negation of these concepts using different prompts for an in-depth analysis. Results: GPT-4 showed overall superior performance compared to the other LLMs. In contrast, both GPT-3.5 and text-davinci-003 exhibited enhanced performance when appropriate prompting strategies were employed. The GPT family of models demonstrated considerable efficiency, evidenced by their cost-effectiveness and time-saving capabilities. Conclusion: A comprehensive qualitative performance evaluation framework for LLMs is developed and operationalized. This framework goes beyond singular performance aspects. With expert annotations, this methodology not only validates LLMs' capabilities in processing complex medical data but also establishes a benchmark for future LLM evaluations across specialized domains.
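The evaluation pipeline described in the Methods — extract concepts from a note with MetaMap, then prompt an LLM to judge each concept's temporality and negation — can be sketched as prompt construction. This is an illustrative sketch, not the authors' code: the function name, prompt wording, and label set are assumptions for demonstration only.

```python
# Hedged sketch (not the paper's actual prompts): build a prompt asking an
# LLM to classify the negation and temporality of a concept that a tool
# such as MetaMap extracted from a clinical note. The wording and label
# vocabulary below are illustrative assumptions.

def build_assertion_prompt(note_text: str, concept: str) -> str:
    """Construct a prompt that asks an LLM to judge a concept's
    negation and temporality in the context of a clinical note."""
    return (
        "You are reviewing an adult critical care clinical note.\n"
        f"Note:\n{note_text}\n\n"
        f'For the concept "{concept}", answer two questions:\n'
        "1. Negation: is the concept affirmed or negated in this note?\n"
        "2. Temporality: is it a current, past, or hypothetical finding?\n"
        "Reply in the form: negation=<affirmed|negated>; "
        "temporality=<current|past|hypothetical>"
    )

# Example with a toy note and a concept MetaMap might extract:
note = "Patient denies chest pain. History of atrial fibrillation."
print(build_assertion_prompt(note, "chest pain"))
```

The completion returned for such a prompt would then be compared against the clinician-adjudicated labels, which is how the paper scores each model and prompting strategy.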

Authors (19)
  1. Darren Liu
  2. Cheng Ding
  3. Delgersuren Bold
  4. Monique Bouvier
  5. Jiaying Lu
  6. Benjamin Shickel
  7. Craig S. Jabaley
  8. Wenhui Zhang
  9. Soojin Park
  10. Michael J. Young
  11. Mark S. Wainwright
  12. Gilles Clermont
  13. Parisa Rashidi
  14. Eric S. Rosenthal
  15. Laurie Dimisko
  16. Ran Xiao
  17. Joo Heung Yoon
  18. Carl Yang
  19. Xiao Hu
Citations (3)