
Extracting Biomedical Factual Knowledge Using Pretrained Language Model and Electronic Health Record Context (2209.07859v2)

Published 26 Aug 2022 in cs.IR, cs.AI, and cs.LG

Abstract: Language Models (LMs) have performed well on biomedical natural language processing applications. In this study, we conducted experiments using prompting methods to extract knowledge from LMs as new Knowledge Bases (LMs as KBs). However, prompting serves only as a lower bound for knowledge extraction, and performs particularly poorly on biomedical-domain KBs. To make LMs as KBs better match the actual application scenarios of the biomedical domain, we add EHR notes as context to the prompt to raise this lower bound in the biomedical domain. We design and validate a series of experiments for our Dynamic-Context-BioLAMA task. Our experiments show that the knowledge possessed by these LMs can distinguish correct knowledge from noisy knowledge in the EHR notes, and that this distinguishing ability can also serve as a new metric for evaluating the amount of knowledge a model possesses.
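The setup described above, cloze-style prompting with an optional EHR note prepended as context, can be sketched as follows. This is a minimal illustration assuming a hypothetical template format; the function name, templates, and example note are not taken from the paper.

```python
# Illustrative sketch of context-augmented cloze prompting (Dynamic-Context-BioLAMA
# style). Plain prompting is the "lower bound" setting; prepending an EHR note asks
# the LM to pick the correct fact out of possibly noisy clinical text.

def build_prompt(subject, relation_template, ehr_note=None):
    """Fill a cloze template containing a [MASK] slot, optionally with EHR context."""
    query = relation_template.format(subject=subject)
    if ehr_note:
        # Context-augmented variant: EHR note precedes the cloze query.
        return f"{ehr_note} {query}"
    return query

# Plain prompting (knowledge must be recalled by the LM alone)
base = build_prompt("metformin", "{subject} is used to treat [MASK].")

# EHR-augmented prompting (knowledge can be grounded in the note)
note = "Patient with type 2 diabetes mellitus, started on metformin 500 mg daily."
augmented = build_prompt("metformin", "{subject} is used to treat [MASK].", ehr_note=note)
```

In the actual task, the filled prompt would be passed to a masked language model and the prediction for the `[MASK]` token scored against the gold answer, with and without the EHR context.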

Authors (5)
  1. Zonghai Yao (33 papers)
  2. Yi Cao (68 papers)
  3. Zhichao Yang (37 papers)
  4. Vijeta Deshpande (6 papers)
  5. Hong Yu (114 papers)
Citations (18)