Extracting Biomedical Factual Knowledge Using Pretrained Language Model and Electronic Health Record Context (2209.07859v2)
Abstract: Language models (LMs) have performed well on biomedical natural language processing applications. In this study, we conducted experiments that use prompting methods to extract knowledge from LMs treated as knowledge bases (LMs-as-KBs). However, prompting can only serve as a lower bound for knowledge extraction, and it performs particularly poorly on biomedical-domain KBs. To make LMs-as-KBs better match real application scenarios in the biomedical domain, we add EHR notes as context to the prompt to raise this lower bound. We design and validate a series of experiments for our Dynamic-Context-BioLAMA task. Our experiments show that the knowledge possessed by these LMs can distinguish correct knowledge from noisy knowledge in EHR notes, and that this distinguishing ability can also serve as a new metric for the amount of knowledge a model possesses.
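The core idea of adding EHR context to a cloze prompt can be sketched as follows. This is a minimal illustration, not the paper's implementation: the note text, template, and helper function are hypothetical, and the paper's actual prompt templates and models are not specified here.

```python
# Sketch of context-augmented cloze prompting for LMs-as-KBs.
# All strings and the helper name are illustrative assumptions.

def build_prompt(ehr_note: str, cloze_template: str) -> str:
    """Prepend an EHR note as context to a cloze-style knowledge probe."""
    return f"{ehr_note} {cloze_template}"

# Hypothetical EHR note serving as dynamic context.
ehr_note = "Patient presents with polyuria and an elevated HbA1c of 9.2%."
# Hypothetical cloze template probing a biomedical fact.
cloze_template = "Polyuria is a symptom of [MASK]."

prompt = build_prompt(ehr_note, cloze_template)
print(prompt)
```

A masked language model would then be asked to fill `[MASK]`; the paper's premise is that relevant context in the note raises the chance of recovering the correct fact, while the model's pretrained knowledge helps it ignore noisy or misleading context.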
- Zonghai Yao (33 papers)
- Yi Cao (68 papers)
- Zhichao Yang (37 papers)
- Vijeta Deshpande (6 papers)
- Hong Yu (114 papers)