KgPLM: Knowledge-guided Language Model Pre-training via Generative and Discriminative Learning (2012.03551v1)
Abstract: Recent studies on pre-trained language models have demonstrated their ability to capture factual knowledge and their applicability to knowledge-aware downstream tasks. In this work, we present a language model pre-training framework guided by factual knowledge completion and verification, and use generative and discriminative approaches cooperatively to learn the model. In particular, we investigate two learning schemes, a two-tower scheme and a pipeline scheme, for training the generator and discriminator with shared parameters. Experimental results on LAMA, a set of zero-shot cloze-style question answering tasks, show that our model contains richer factual knowledge than conventional pre-trained language models. Furthermore, when fine-tuned and evaluated on the MRQA shared tasks, which consist of several machine reading comprehension datasets, our model achieves state-of-the-art performance and gains large improvements on NewsQA (+1.26 F1) and TriviaQA (+1.56 F1) over RoBERTa.
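To make the generator/discriminator setup with shared parameters more concrete, below is a minimal PyTorch sketch, not the authors' code: a shared encoder feeds a generative head (fill in masked tokens, i.e., knowledge completion) and a discriminative head (label each token as original vs. replaced, i.e., knowledge verification). The "two-tower" vs. "pipeline" distinction is approximated as the two heads training in parallel on independently corrupted input versus the discriminator verifying the generator's fills; the names (`SharedEncoder`, `KgPLMSketch`, `MASK_ID`) and the exact corruption strategy are illustrative assumptions, and the paper's knowledge-guided masking may differ.

```python
# Minimal sketch (not the authors' implementation) of cooperative
# generative + discriminative pre-training with a shared encoder.
import torch
import torch.nn as nn

VOCAB, DIM, MASK_ID = 1000, 128, 0  # toy sizes; real models use a full subword vocabulary


class SharedEncoder(nn.Module):
    """Encoder whose parameters are shared by both pre-training objectives."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, ids):
        return self.encoder(self.embed(ids))


class KgPLMSketch(nn.Module):
    def __init__(self, scheme="pipeline"):
        super().__init__()
        self.scheme = scheme
        self.shared = SharedEncoder()           # shared parameters
        self.gen_head = nn.Linear(DIM, VOCAB)   # generative head: predict masked tokens
        self.disc_head = nn.Linear(DIM, 1)      # discriminative head: original vs. replaced

    def forward(self, masked_ids, original_ids, mask_positions):
        h = self.shared(masked_ids)
        gen_logits = self.gen_head(h)  # (batch, seq, vocab)
        gen_loss = nn.functional.cross_entropy(
            gen_logits[mask_positions], original_ids[mask_positions])

        if self.scheme == "two-tower":
            # Two-tower (assumed reading): heads trained in parallel; the discriminator
            # verifies an independently corrupted copy of the input.
            rand = torch.randint(1, VOCAB, original_ids.shape, device=original_ids.device)
            disc_input = torch.where(mask_positions, rand, original_ids)
        else:
            # Pipeline (assumed reading): the generator fills the masked slots,
            # then the discriminator verifies every token of the filled sequence.
            sampled = gen_logits.argmax(-1)  # argmax breaks the gradient, as in ELECTRA-style training
            disc_input = torch.where(mask_positions, sampled, original_ids)

        labels = (disc_input != original_ids).float()
        disc_logits = self.disc_head(self.shared(disc_input)).squeeze(-1)
        disc_loss = nn.functional.binary_cross_entropy_with_logits(disc_logits, labels)
        return gen_loss + disc_loss


# Toy usage: batch of 2 sequences of length 8, with 2 masked slots each.
ids = torch.randint(1, VOCAB, (2, 8))
mask_pos = torch.zeros_like(ids, dtype=torch.bool)
mask_pos[:, 3] = mask_pos[:, 5] = True
masked = torch.where(mask_pos, torch.full_like(ids, MASK_ID), ids)
loss = KgPLMSketch(scheme="pipeline")(masked, ids, mask_pos)
loss.backward()
```

In both schemes the encoder parameters receive gradients from the completion and verification losses simultaneously, which is the "cooperative" aspect the abstract refers to; the actual model would apply this to knowledge-bearing spans rather than random token positions.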
- Bin He (58 papers)
- Xin Jiang (242 papers)
- Jinghui Xiao (9 papers)
- Qun Liu (230 papers)