
LLM-MedQA: Enhancing Medical Question Answering through Case Studies in Large Language Models (2501.05464v1)

Published 31 Dec 2024 in cs.CL, cs.AI, and cs.IR

Abstract: Accurate and efficient question-answering systems are essential for delivering high-quality patient care in the medical field. While LLMs have made remarkable strides across various domains, they continue to face significant challenges in medical question answering, particularly in understanding domain-specific terminologies and performing complex reasoning. These limitations undermine their effectiveness in critical medical applications. To address these issues, we propose a novel approach incorporating similar case generation within a multi-agent medical question-answering (MedQA) system. Specifically, we leverage the Llama3.1:70B model, a state-of-the-art LLM, in a multi-agent architecture to enhance performance on the MedQA dataset using zero-shot learning. Our method capitalizes on the model's inherent medical knowledge and reasoning capabilities, eliminating the need for additional training data. Experimental results show substantial performance gains over existing benchmark models, with improvements of 7% in both accuracy and F1-score across various medical QA tasks. Furthermore, we examine the model's interpretability and reliability in addressing complex medical queries. This research not only offers a robust solution for medical question answering but also establishes a foundation for broader applications of LLMs in the medical domain.
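The abstract describes a zero-shot, multi-agent pipeline in which the model first generates similar clinical cases and then answers the question in light of them. The paper's code is not reproduced here, so the sketch below is only a rough illustration of that idea under stated assumptions: the `call_llm` helper, the prompts, and the two-agent split are placeholders, not the authors' implementation.

```python
# Hypothetical sketch of a two-agent, zero-shot MedQA pipeline with
# similar-case generation. `call_llm` stands in for whatever interface
# serves Llama3.1:70B; prompts and agent roles are illustrative only.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a Llama3.1:70B endpoint and return its text output."""
    raise NotImplementedError("wire this to your LLM serving stack")


def generate_similar_cases(question: str, n_cases: int = 2) -> str:
    """Agent 1: ask the model for short clinical vignettes resembling the question."""
    prompt = (
        "You are a medical case writer. Write "
        f"{n_cases} brief clinical cases similar to the scenario below, "
        "each with its diagnosis and key reasoning.\n\n"
        f"Scenario:\n{question}"
    )
    return call_llm(prompt)


def answer_with_cases(question: str, options: list[str], cases: str) -> str:
    """Agent 2: answer the multiple-choice question, conditioning on the generated cases."""
    option_block = "\n".join(f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options))
    prompt = (
        "You are a medical expert. Use the reference cases to reason step by step, "
        "then answer with a single option letter.\n\n"
        f"Reference cases:\n{cases}\n\n"
        f"Question:\n{question}\n\nOptions:\n{option_block}\n\nAnswer:"
    )
    return call_llm(prompt)


def medqa_pipeline(question: str, options: list[str]) -> str:
    """Zero-shot pipeline: generate similar cases, then answer using them as context."""
    cases = generate_similar_cases(question)
    return answer_with_cases(question, options, cases)
```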

Authors (9)
  1. Hang Yang (70 papers)
  2. Hao Chen (1005 papers)
  3. Hui Guo (49 papers)
  4. Yineng Chen (3 papers)
  5. Ching-Sheng Lin (1 paper)
  6. Shu Hu (63 papers)
  7. Jinrong Hu (23 papers)
  8. Xi Wu (100 papers)
  9. Xin Wang (1306 papers)