A Survey of Knowledge Enhanced Pre-trained Models (2110.00269v5)
Abstract: Pre-trained language models learn informative word representations from a large-scale text corpus through self-supervised learning, and after fine-tuning they achieve promising performance on natural language processing (NLP) tasks. These models, however, suffer from poor robustness and a lack of interpretability. We refer to pre-trained language models with knowledge injection as knowledge-enhanced pre-trained language models (KEPLMs). These models demonstrate deeper understanding and logical reasoning, and introduce interpretability. In this survey, we provide a comprehensive overview of KEPLMs in NLP. We first discuss the advancements in pre-trained language models and knowledge representation learning. We then systematically categorize existing KEPLMs from three different perspectives. Finally, we outline some potential directions for future research on KEPLMs.
- Jian Yang
- Xinyu Hu
- Gang Xiao
- Yulong Shen