Improving Biomedical Pretrained Language Models with Knowledge (2104.10344v1)

Published 21 Apr 2021 in cs.CL

Abstract: Pretrained language models have shown success in many natural language processing tasks. Many works explore incorporating knowledge into language models. In the biomedical domain, experts have spent decades building large-scale knowledge bases. For example, the Unified Medical Language System (UMLS) contains millions of entities with their synonyms and defines hundreds of relations among entities. Leveraging this knowledge can benefit a variety of downstream tasks such as named entity recognition and relation extraction. To this end, we propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge bases. Specifically, we extract entities from PubMed abstracts and link them to UMLS. We then train a knowledge-aware language model that first applies a text-only encoding layer to learn entity representations and then applies a text-entity fusion encoding layer to aggregate them. In addition, we add two training objectives: entity detection and entity linking. Experiments on the named entity recognition and relation extraction tasks from the BLURB benchmark demonstrate the effectiveness of our approach. Further analysis on a collected probing dataset shows that our model captures medical knowledge better.
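
The sketch below illustrates the kind of text-entity fusion and auxiliary objectives the abstract describes: token states from a text-only encoder are combined with embeddings of linked UMLS entities, and two heads score entity detection and entity linking. It is a minimal PyTorch illustration under assumed shapes and layer names (EntityFusionLayer, detect_head, link_head are hypothetical), not the authors' released implementation.

```python
# Minimal sketch of a KeBioLM-style fusion layer; dimensions and names are
# illustrative assumptions, not the paper's actual architecture or code.
import torch
import torch.nn as nn

class EntityFusionLayer(nn.Module):
    """Fuses token representations with embeddings of linked UMLS entities."""

    def __init__(self, hidden_size: int, entity_dim: int, num_entities: int):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, entity_dim)
        # Project the concatenated [token; entity] vector back to hidden_size.
        self.fuse = nn.Linear(hidden_size + entity_dim, hidden_size)
        # Auxiliary heads: entity detection (B/I/O tagging) and entity linking.
        self.detect_head = nn.Linear(hidden_size, 3)
        self.link_head = nn.Linear(hidden_size, entity_dim)

    def forward(self, token_states: torch.Tensor, entity_ids: torch.Tensor):
        # token_states: (batch, seq_len, hidden_size) from a text-only encoder
        # entity_ids:   (batch, seq_len) index of the linked entity per token
        ent = self.entity_emb(entity_ids)                              # (B, L, entity_dim)
        fused = torch.tanh(self.fuse(torch.cat([token_states, ent], dim=-1)))
        detect_logits = self.detect_head(fused)                        # entity detection
        link_scores = self.link_head(fused) @ self.entity_emb.weight.T # entity linking
        return fused, detect_logits, link_scores
```

In this reading, the detection and linking losses would be computed from detect_logits and link_scores against the entities extracted from PubMed abstracts and linked to UMLS, alongside the usual masked-language-modeling objective.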

Authors (5)
  1. Zheng Yuan (117 papers)
  2. Yijia Liu (19 papers)
  3. Chuanqi Tan (56 papers)
  4. Songfang Huang (51 papers)
  5. Fei Huang (408 papers)
Citations (87)