
Performance of large language models in numerical vs. semantic medical knowledge: Benchmarking on evidence-based Q&As (2406.03855v3)

Published 6 Jun 2024 in cs.CL

Abstract: Clinical problem-solving requires processing of semantic medical knowledge, such as illness scripts, and numerical medical knowledge of diagnostic tests for evidence-based decision-making. While LLMs show promising results in many aspects of language-based clinical practice, their ability to generate non-language, evidence-based answers to clinical questions is inherently limited by tokenization. Therefore, we evaluated LLMs' performance on two question types: numeric (correlating findings) and semantic (differentiating entities), examining differences within and between LLMs across medical aspects and comparing their performance to that of humans. To generate straightforward multiple-choice questions and answers (QAs) based on evidence-based medicine (EBM), we used a comprehensive medical knowledge graph (encompassing data from more than 50,000 peer-reviewed articles) and created "EBMQA". EBMQA contains 105,000 QAs labeled with medical and non-medical topics and classified into numerical or semantic questions. We benchmarked this dataset using more than 24,500 QAs on two state-of-the-art LLMs: Chat-GPT4 and Claude3-Opus. We evaluated the LLMs' accuracy on semantic and numerical question types and across sub-labeled topics. For validation, six medical experts were tested on 100 numerical EBMQA questions. We found that both LLMs excelled more in semantic than numerical QAs, with Claude3 surpassing GPT4 in numerical QAs. However, both LLMs showed inter- and intra-model gaps across different medical aspects and remained inferior to humans. Thus, their medical advice should be treated with caution.
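The evaluation described in the abstract amounts to scoring multiple-choice answers and reporting accuracy separately for numerical and semantic questions. The paper does not publish its scoring code here; the following is a minimal sketch under an assumed record schema (`qtype`, `model_answer`, `correct_answer` are hypothetical field names, not from the EBMQA release):

```python
from collections import defaultdict

def accuracy_by_type(qas):
    """Compute multiple-choice accuracy per question type.

    Each record is a dict with a question type ('numerical' or 'semantic'),
    the model's chosen option, and the evidence-based correct option.
    This schema is illustrative, not the actual EBMQA format.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for qa in qas:
        total[qa["qtype"]] += 1
        if qa["model_answer"] == qa["correct_answer"]:
            correct[qa["qtype"]] += 1
    return {t: correct[t] / total[t] for t in total}

# Toy results illustrating the reported pattern: stronger on semantic QAs.
results = [
    {"qtype": "semantic", "model_answer": "B", "correct_answer": "B"},
    {"qtype": "semantic", "model_answer": "A", "correct_answer": "A"},
    {"qtype": "numerical", "model_answer": "C", "correct_answer": "D"},
    {"qtype": "numerical", "model_answer": "D", "correct_answer": "D"},
]
print(accuracy_by_type(results))  # {'semantic': 1.0, 'numerical': 0.5}
```

Splitting accuracy by sub-labeled topic works the same way: key the counters on `(qtype, topic)` pairs instead of `qtype` alone.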

Authors (15)
  1. Eden Avnat
  2. Michal Levy
  3. Daniel Herstain
  4. Elia Yanko
  5. Daniel Ben Joya
  6. Michal Tzuchman Katz
  7. Dafna Eshel
  8. Sahar Laros
  9. Yael Dagan
  10. Shahar Barami
  11. Joseph Mermelstein
  12. Shahar Ovadia
  13. Noam Shomron
  14. Varda Shalev
  15. Raja-Elie E. Abdulnour