Me LLaMA: Foundation Large Language Models for Medical Applications (2402.12749v5)

Published 20 Feb 2024 in cs.CL and cs.AI

Abstract: Recent advancements in LLMs like ChatGPT and LLaMA show promise in medical applications, yet challenges remain in medical language comprehension. This study presents Me-LLaMA, a new medical LLM family based on open-source LLaMA models, optimized for medical text analysis and diagnosis by leveraging large-scale, domain-specific datasets. The Me-LLaMA family, including foundation models Me-LLaMA 13/70B and their chat-enhanced versions, was developed through continued pre-training and instruction tuning with 129B tokens and 214K samples from biomedical and clinical sources. Training the 70B models required over 100,000 A100 GPU hours. Me-LLaMA's performance was evaluated across six medical text analysis tasks using 12 benchmark datasets and complex clinical case diagnosis, with automatic and human evaluations. Results indicate Me-LLaMA outperforms LLaMA and other open-source medical LLMs in zero-shot and supervised settings. Task-specific tuning further boosts performance, surpassing ChatGPT on 7 of 8 datasets and GPT-4 on 5 of 8. For complex clinical cases, Me-LLaMA achieves performance comparable to ChatGPT and GPT-4. This work underscores the importance of domain-specific data in developing medical LLMs and addresses the high computational costs involved in training, highlighting a balance between pre-training and fine-tuning strategies. Me-LLaMA models are now accessible under user agreements, providing a valuable resource for advancing medical AI.
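
As a rough illustration of the continued pre-training stage the abstract describes, the sketch below adapts a LLaMA-family causal LM to medical text using Hugging Face Transformers. The base checkpoint name, the corpus file `medical_corpus.txt`, and all hyperparameters are illustrative assumptions, not the paper's actual configuration (which used 129B tokens of biomedical and clinical text and over 100,000 A100 GPU hours for the 70B models).

```python
# Minimal sketch of continued (domain-adaptive) pre-training of a LLaMA-style
# causal LM on medical text with Hugging Face Transformers.
# All paths, model names, and hyperparameters are illustrative, not the
# paper's actual setup.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-2-13b-hf"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus of biomedical/clinical text, one document per line.
raw = load_dataset("text", data_files={"train": "medical_corpus.txt"})

def tokenize(batch):
    # Tokenize raw documents for causal language modeling.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="me-llama-cpt",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=64,
    learning_rate=1e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=100,
    save_steps=1000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    # mlm=False gives next-token (causal) language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The subsequent instruction-tuning stage for the chat versions would run a similar supervised loop over prompt-response pairs (the paper uses 214K instruction samples); the sketch above covers only the pre-training objective.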

Authors (18)
  1. Qianqian Xie (60 papers)
  2. Qingyu Chen (57 papers)
  3. Aokun Chen (12 papers)
  4. Cheng Peng (177 papers)
  5. Yan Hu (75 papers)
  6. Fongci Lin (3 papers)
  7. Xueqing Peng (12 papers)
  8. Jimin Huang (37 papers)
  9. Jeffrey Zhang (26 papers)
  10. Vipina Keloth (1 paper)
  11. Huan He (45 papers)
  12. Yonghui Wu (115 papers)
  13. Hua Xu (78 papers)
  14. Jiang Bian (229 papers)
  15. Xinyu Zhou (82 papers)
  16. Lucila Ohno-Machado (12 papers)
  17. Lingfei Qian (10 papers)
  18. Dennis Shung (13 papers)