Is larger always better? Evaluating and prompting large language models for non-generative medical tasks (2407.18525v1)

Published 26 Jul 2024 in cs.CL, cs.AI, and cs.LG

Abstract: The use of LLMs in medicine is growing, but their ability to handle both structured Electronic Health Record (EHR) data and unstructured clinical notes is not well studied. This study benchmarks GPT-based LLMs, BERT-based models, and traditional clinical predictive models on non-generative medical tasks using widely used benchmark datasets. We assessed 14 language models (9 GPT-based and 5 BERT-based) and 7 traditional predictive models on the MIMIC dataset (ICU patient records) and the TJH dataset (early COVID-19 EHR data), focusing on tasks such as mortality and readmission prediction, disease hierarchy reconstruction, and biomedical sentence matching, and comparing both zero-shot and finetuned performance. Results indicated that LLMs exhibited robust zero-shot predictive capabilities on structured EHR data when given well-designed prompting strategies, frequently surpassing traditional models. For unstructured medical texts, however, LLMs did not outperform finetuned BERT models, which excelled in both supervised and unsupervised tasks. Consequently, LLMs are effective for zero-shot learning on structured data, while finetuned BERT models remain more suitable for unstructured texts, underscoring the importance of matching the model to the task requirements and data characteristics when applying NLP in healthcare.
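
The abstract's zero-shot results hinge on serializing structured EHR features into natural-language prompts. Below is a minimal sketch of what such a prompting strategy might look like; the feature names, units, values, and prompt wording are illustrative assumptions, not the paper's actual prompts.

```python
# A minimal sketch of serializing a structured EHR record into a
# zero-shot prompt for in-hospital mortality prediction. The feature
# names, units, values, and prompt wording are illustrative
# assumptions, not the paper's actual prompts.

from typing import Dict


def ehr_to_prompt(features: Dict[str, float]) -> str:
    """Turn structured EHR features into a natural-language prompt."""
    feature_lines = "\n".join(
        f"- {name}: {value}" for name, value in features.items()
    )
    return (
        "You are a clinical risk-prediction assistant.\n"
        "Given the following ICU patient features, respond with a single\n"
        "number between 0 and 1: the probability of in-hospital mortality.\n\n"
        f"{feature_lines}\n\n"
        "In-hospital mortality probability:"
    )


# Hypothetical patient record; values are made up for illustration.
record = {
    "age (years)": 67,
    "heart rate (bpm)": 112,
    "systolic blood pressure (mmHg)": 88,
    "lactate (mmol/L)": 4.1,
}

print(ehr_to_prompt(record))
# The prompt would then be sent to an LLM of your choice, and the
# returned number parsed from the completion and scored against labels.
```

In the zero-shot setting no gradient updates occur, so the prompt design alone determines how well the model maps tabular features to a risk estimate, which is why the abstract emphasizes well-designed prompting strategies.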

Authors (10)
  1. Yinghao Zhu (45 papers)
  2. Junyi Gao (20 papers)
  3. Zixiang Wang (17 papers)
  4. Weibin Liao (9 papers)
  5. Xiaochen Zheng (29 papers)
  6. Lifang Liang (1 paper)
  7. Yasha Wang (47 papers)
  8. Chengwei Pan (30 papers)
  9. Ewen M. Harrison (4 papers)
  10. Liantao Ma (23 papers)
Citations (1)