Integrating Graph Contextualized Knowledge into Pre-trained Language Models (1912.00147v3)
Abstract: Complex node interactions are common in knowledge graphs, and these interactions also contain rich knowledge information. However, traditional methods usually treat a triple as a training unit during the knowledge representation learning (KRL) procedure, neglecting contextualized information of the nodes in knowledge graphs (KGs). We generalize the modeling object to a more general form, which theoretically supports any subgraph extracted from the knowledge graph, and these subgraphs are fed into a novel transformer-based model to learn the knowledge embeddings. To broaden the usage scenarios of this knowledge, pre-trained language models are utilized to build a model that incorporates the learned knowledge representations. Experimental results demonstrate that our model achieves state-of-the-art performance on several medical NLP tasks, and the improvement over TransE indicates that our KRL method captures the graph contextualized information effectively.
- Bin He (58 papers)
- Di Zhou (60 papers)
- Jinghui Xiao (9 papers)
- Qun Liu (230 papers)
- Nicholas Jing Yuan (22 papers)
- Tong Xu (113 papers)
- Xin Jiang (242 papers)
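
To make the abstract's pipeline concrete, below is a minimal sketch of the general idea: linearize a KG subgraph into a sequence of entity and relation ids, contextualize it with a Transformer encoder, and score triples with a TransE-style distance. All names (`SubgraphEncoder`, `triple_score`), dimensions, and the margin loss are illustrative assumptions, not the architecture or training objective from the paper.

```python
# Hypothetical sketch: encode a KG subgraph with a Transformer and score triples.
# This is NOT the paper's model; it only illustrates "subgraph -> transformer ->
# knowledge embeddings" under assumed design choices.
import torch
import torch.nn as nn


class SubgraphEncoder(nn.Module):
    def __init__(self, num_entities, num_relations, dim=128, heads=4, layers=2):
        super().__init__()
        # Entities and relations share one embedding table; relation ids are
        # offset by num_entities (an assumption for this sketch).
        self.embed = nn.Embedding(num_entities + num_relations, dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, node_ids):
        # node_ids: (batch, seq_len) ids of a linearized subgraph,
        # e.g. [h1, r1, t1, r2, t2] for two triples sharing a head entity.
        x = self.embed(node_ids)
        return self.encoder(x)  # contextualized node representations


def triple_score(ctx, head_pos, rel_pos, tail_pos):
    # TransE-style score on contextualized embeddings:
    # a smaller ||h + r - t|| means a more plausible triple.
    h = ctx[:, head_pos, :]
    r = ctx[:, rel_pos, :]
    t = ctx[:, tail_pos, :]
    return torch.norm(h + r - t, p=2, dim=-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = SubgraphEncoder(num_entities=1000, num_relations=50)
    # Toy subgraph with two triples sharing the head entity 3;
    # relation ids are offset by 1000.
    subgraph = torch.tensor([[3, 1005, 7, 1012, 42]])
    ctx = model(subgraph)
    pos = triple_score(ctx, head_pos=0, rel_pos=1, tail_pos=2)
    neg = triple_score(ctx, head_pos=0, rel_pos=1, tail_pos=4)  # corrupted tail
    margin_loss = torch.relu(1.0 + pos - neg).mean()
    margin_loss.backward()
    print(float(margin_loss))
```

In this sketch every node's representation is conditioned on all other nodes in the subgraph via self-attention, which is one way to capture the "graph contextualized information" the abstract contrasts with triple-by-triple training; the actual model and knowledge-integration step into the pre-trained language model are described in the paper itself.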