A Survey for Large Language Models in Biomedicine (2409.00133v1)

Published 29 Aug 2024 in cs.CL and cs.AI

Abstract: Recent breakthroughs in LLMs offer unprecedented natural language understanding and generation capabilities. However, existing surveys on LLMs in biomedicine often focus on specific applications or model architectures, lacking a comprehensive analysis that integrates the latest advancements across various biomedical domains. This review, based on an analysis of 484 publications sourced from databases including PubMed, Web of Science, and arXiv, provides an in-depth examination of the current landscape, applications, challenges, and prospects of LLMs in biomedicine, distinguishing itself by focusing on the practical implications of these models in real-world biomedical contexts. First, we explore the capabilities of LLMs in zero-shot learning across a broad spectrum of biomedical tasks, including diagnostic assistance, drug discovery, and personalized medicine, among others, drawing insights from 137 key studies. Then, we discuss adaptation strategies for LLMs, including fine-tuning methods for both uni-modal and multi-modal LLMs, to enhance performance in specialized biomedical contexts where zero-shot learning falls short, such as medical question answering and the efficient processing of biomedical literature. Finally, we discuss the challenges LLMs face in the biomedical domain, including data privacy concerns, limited model interpretability, and issues with dataset quality, as well as the ethical questions raised by the sensitive nature of biomedical data, the need for highly reliable model outputs, and the implications of deploying AI in healthcare. To address these challenges, we also identify future research directions for LLMs in biomedicine, including federated learning methods to preserve data privacy and the integration of explainable AI methodologies to enhance model transparency.
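
To make the zero-shot setting concrete, the sketch below shows how a MedQA-style multiple-choice question might be posed to a general-purpose LLM using instructions alone, with no task-specific training. This is a minimal illustration, not the survey's protocol: the `complete` function is a hypothetical placeholder for whatever chat-completion client is in use, and the prompt format is one plausible choice among many.

```python
# Zero-shot diagnostic-assistance prompt: no fine-tuning, no in-context
# examples, only task instructions in the prompt itself.

def complete(prompt: str) -> str:
    # Placeholder: wire this to an actual LLM provider (a hosted API or a
    # local model). It is a stand-in, not a real library call.
    raise NotImplementedError("connect to an LLM backend")

def zero_shot_medqa(question: str, options: dict[str, str]) -> str:
    """Format a MedQA-style multiple-choice item as a zero-shot prompt."""
    choices = "\n".join(f"{k}. {v}" for k, v in options.items())
    prompt = (
        "You are a careful medical assistant. Answer the question below "
        "with the single best option letter, then one sentence of rationale.\n\n"
        f"Question: {question}\nOptions:\n{choices}\nAnswer:"
    )
    return complete(prompt)

# Example call (runs once `complete` is wired to a backend):
# zero_shot_medqa("Which electrolyte abnormality is most associated with "
#                 "loop diuretic use?",
#                 {"A": "Hyperkalemia", "B": "Hypokalemia", "C": "Hypercalcemia"})
```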
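Of the future directions the abstract identifies, federated learning is the most mechanical, so a small sketch may help. Below is a minimal federated averaging (FedAvg) loop in which each hospital trains a simple logistic-regression model on its own synthetic data and shares only weights, never records. This is an illustrative toy under stated assumptions, not a method from the survey.

```python
# Minimal FedAvg sketch: clients train locally on private data and share
# only model weights; the server averages them, weighted by dataset size,
# so raw patient records never leave each institution. Real deployments
# add secure aggregation, differential privacy, and many more rounds.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic-regression gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # logistic-loss gradient
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: dataset-size-weighted average of client models."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(4)
# Three "hospitals" with private datasets of different sizes (synthetic).
clients = [(rng.normal(size=(n, 4)), rng.integers(0, 2, size=n).astype(float))
           for n in (120, 80, 200)]

for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

print("global model weights:", global_w)
```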

Authors (17)
  1. Chong Wang (308 papers)
  2. Mengyao Li (17 papers)
  3. Junjun He (77 papers)
  4. Zhongruo Wang (11 papers)
  5. Erfan Darzi (7 papers)
  6. Zan Chen (9 papers)
  7. Jin Ye (38 papers)
  8. Tianbin Li (20 papers)
  9. Yanzhou Su (26 papers)
  10. Jing Ke (7 papers)
  11. Kaili Qu (1 paper)
  12. Shuxin Li (19 papers)
  13. Yi Yu (223 papers)
  14. Pietro Liò (270 papers)
  15. Tianyun Wang (3 papers)
  16. Yu Guang Wang (59 papers)
  17. Yiqing Shen (53 papers)
Citations (4)