Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding (2210.08536v1)

Published 16 Oct 2022 in cs.CL

Abstract: Knowledge-enhanced Pre-trained Language Model (PLM) has recently received significant attention, which aims to incorporate factual knowledge into PLMs. However, most existing methods modify the internal structures of fixed types of PLMs by stacking complicated modules, and introduce redundant and irrelevant factual knowledge from knowledge bases (KBs). In this paper, to address these problems, we introduce a seminal knowledge prompting paradigm and further propose a knowledge-prompting-based PLM framework KP-PLM. This framework can be flexibly combined with existing mainstream PLMs. Specifically, we first construct a knowledge sub-graph from KBs for each context. Then we design multiple continuous prompt rules and transform the knowledge sub-graph into natural language prompts. To further leverage the factual knowledge from these prompts, we propose two novel knowledge-aware self-supervised tasks including prompt relevance inspection and masked prompt modeling. Extensive experiments on multiple natural language understanding (NLU) tasks show the superiority of KP-PLM over other state-of-the-art methods in both full-resource and low-resource settings.
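The core idea of the knowledge prompting paradigm is to verbalize a retrieved knowledge sub-graph into natural language prompts that any off-the-shelf PLM can consume, rather than altering the model's architecture. Below is a minimal illustrative sketch of that step, not the authors' released code: the triples, relation templates, and function names are assumptions made for clarity.

```python
# Hypothetical sketch of knowledge prompting: verbalize a knowledge sub-graph
# (a list of triples) into a natural-language prompt and append it to the
# context, so a standard PLM can use the knowledge without structural changes.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

# Simple relation-to-template rules; the paper's prompt rules are richer.
TEMPLATES = {
    "birthplace": "{h} was born in {t}.",
    "occupation": "{h} works as a {t}.",
    "located_in": "{h} is located in {t}.",
}

def verbalize(triples: List[Triple]) -> str:
    """Turn a knowledge sub-graph into a natural-language prompt string."""
    sentences = []
    for h, r, t in triples:
        template = TEMPLATES.get(r, "{h} {r} {t}.")
        sentences.append(template.format(h=h, r=r.replace("_", " "), t=t))
    return " ".join(sentences)

def build_input(context: str, triples: List[Triple]) -> str:
    """Append the verbalized knowledge prompt to the original context."""
    return f"{context} [SEP] {verbalize(triples)}"

if __name__ == "__main__":
    context = "Ada Lovelace wrote the first published algorithm."
    subgraph = [
        ("Ada Lovelace", "birthplace", "London"),
        ("Ada Lovelace", "occupation", "mathematician"),
    ]
    print(build_input(context, subgraph))
```

In the paper's framework, the resulting prompts additionally feed two self-supervised objectives, prompt relevance inspection and masked prompt modeling, which encourage the PLM to attend to and reconstruct the injected factual knowledge.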

Authors (7)
  1. Jianing Wang (50 papers)
  2. Wenkang Huang (2 papers)
  3. Qiuhui Shi (4 papers)
  4. Hongbin Wang (38 papers)
  5. Minghui Qiu (58 papers)
  6. Xiang Li (1002 papers)
  7. Ming Gao (95 papers)
Citations (12)