Pre-trained Language Models in Biomedical Domain: A Systematic Survey (2110.05006v4)

Published 11 Oct 2021 in cs.CL

Abstract: Pre-trained language models (PLMs) have become the de facto paradigm for most NLP tasks. This also benefits the biomedical domain: researchers from the informatics, medicine, and computer science (CS) communities have proposed various PLMs trained on biomedical datasets, e.g., biomedical text, electronic health records, protein sequences, and DNA sequences, for various biomedical tasks. However, the cross-discipline character of biomedical PLMs hinders their spread across communities; some existing works are isolated from one another, lacking comprehensive comparison and discussion. This calls for a survey that not only systematically reviews recent advances in biomedical PLMs and their applications but also standardizes terminology and benchmarks. In this paper, we summarize the recent progress of pre-trained language models in the biomedical domain and their applications in biomedical downstream tasks. In particular, we discuss the motivations for existing biomedical PLMs and propose a taxonomy of them. Their applications in biomedical downstream tasks are discussed exhaustively. Finally, we illustrate various limitations and future trends, which we hope can provide inspiration for future research in the community.

Authors (7)
  1. Benyou Wang (109 papers)
  2. Qianqian Xie (60 papers)
  3. Jiahuan Pei (16 papers)
  4. Zhihong Chen (63 papers)
  5. Prayag Tiwari (41 papers)
  6. Zhao Li (109 papers)
  7. Jie Fu (229 papers)
Citations (128)