
Injecting Knowledge into Biomedical Pre-trained Models via Polymorphism and Synonymous Substitution (2305.15010v1)

Published 24 May 2023 in cs.CL

Abstract: Pre-trained language models (PLMs) are considered able to store relational knowledge present in their training data. However, some relational knowledge appears to be lost in PLMs due to reporting bias: low-frequency relational knowledge may be under-expressed compared with high-frequency knowledge. This suggests that externally provided relational knowledge is not redundant with the knowledge already stored in PLMs, but rather complementary to it. To inject additional relational knowledge into PLMs, we propose a simple yet effective approach inspired by three observations: polymorphism, synonymous substitution, and association. In particular, we switch entities in the training corpus to related entities (hypernyms, hyponyms, synonyms, or arbitrarily related concepts). Experimental results show that the proposed approach not only better captures relational knowledge, but also improves performance on various biomedical downstream tasks. Our model is available at https://github.com/StevenZHB/BioPLM_InjectingKnowledge.

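The core operation the abstract describes is an entity swap over the pre-training corpus: mentions of a biomedical entity are occasionally replaced by a related term before the model sees the text. The sketch below is a minimal illustration only; the toy dictionary, swap probability, and function name are assumptions for demonstration and not the authors' implementation, which lives in the linked repository and draws relations from a biomedical knowledge source.

```python
import random

# Hypothetical toy relation table: each entity maps to related terms
# (synonyms, hypernyms, hyponyms). The paper uses a biomedical knowledge
# base for this; the entries here are illustrative only.
RELATED_TERMS = {
    "aspirin": ["acetylsalicylic acid", "NSAID"],      # synonym, hypernym
    "myocardial infarction": ["heart attack"],          # synonym
}

def substitute_entities(text: str, p_swap: float = 0.15) -> str:
    """Replace known entity mentions with a related term with probability p_swap."""
    for entity, candidates in RELATED_TERMS.items():
        if entity in text and random.random() < p_swap:
            text = text.replace(entity, random.choice(candidates))
    return text

if __name__ == "__main__":
    corpus = ["Patients with myocardial infarction were given aspirin daily."]
    augmented = [substitute_entities(sentence) for sentence in corpus]
    print(augmented[0])
```

In this reading, the augmented sentences are simply mixed into the pre-training corpus, so the model repeatedly sees low-frequency entities in the contexts of their higher-frequency relatives.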
Authors (3)
  1. Hongbo Zhang (54 papers)
  2. Xiang Wan (94 papers)
  3. Benyou Wang (109 papers)
Citations (1)