Plug-and-Play Knowledge Injection for Pre-trained Language Models (2305.17691v2)

Published 28 May 2023 in cs.CL

Abstract: Injecting external knowledge can improve the performance of pre-trained language models (PLMs) on various downstream NLP tasks. However, massive retraining is required to deploy new knowledge injection methods or knowledge bases for downstream tasks. In this work, we are the first to study how to improve the flexibility and efficiency of knowledge injection by reusing existing downstream models. To this end, we explore a new paradigm, plug-and-play knowledge injection, in which knowledge bases are injected into frozen existing downstream models via a knowledge plugin. Correspondingly, we propose map-tuning, a plug-and-play injection method that trains a mapping of knowledge embeddings to enrich model inputs with mapped embeddings while keeping model parameters frozen. Experimental results on three knowledge-driven NLP tasks show that existing injection methods are not suitable for the new paradigm, while map-tuning effectively improves the performance of downstream models. Moreover, we show that a frozen downstream model can be well adapted to different domains with different mapping networks of domain knowledge. Our code and models are available at https://github.com/THUNLP/Knowledge-Plugin.
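
To make the mechanism concrete, below is a minimal PyTorch sketch of the core idea: a small trainable mapping network projects knowledge-base embeddings into the PLM's input-embedding space, and the mapped vector is spliced into the token-embedding sequence while the PLM itself stays frozen. The linear mapper, the 100-dimensional KB embeddings, the BERT-base backbone, and the hand-picked insertion position are all illustrative assumptions; the paper's actual map-tuning architecture, insertion scheme, and training objective may differ.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class MappingNetwork(nn.Module):
    """Trainable map from KB-embedding space to the PLM input-embedding space.

    In the plug-and-play setting this is the only component that is trained;
    the downstream PLM is kept frozen. A single affine map is an assumption
    here, not necessarily the paper's exact architecture.
    """

    def __init__(self, kb_dim: int, plm_dim: int):
        super().__init__()
        self.proj = nn.Linear(kb_dim, plm_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


tok = AutoTokenizer.from_pretrained("bert-base-uncased")
plm = AutoModel.from_pretrained("bert-base-uncased")
for p in plm.parameters():          # plug-and-play: the downstream model stays frozen
    p.requires_grad = False

# Hypothetical 100-d KB embeddings (e.g., TransE-style entity vectors).
mapper = MappingNetwork(kb_dim=100, plm_dim=plm.config.hidden_size)

batch = tok("Steve Jobs founded Apple.", return_tensors="pt")
token_embs = plm.get_input_embeddings()(batch["input_ids"])   # (1, T, 768)

entity_emb = torch.randn(1, 100)          # stand-in for a real KB entity embedding
mapped = mapper(entity_emb).unsqueeze(1)  # (1, 1, 768)

# Splice the mapped embedding in after the entity mention (position 3 is
# chosen by hand for illustration) and extend the attention mask to match.
pos = 3
enriched = torch.cat([token_embs[:, :pos], mapped, token_embs[:, pos:]], dim=1)
mask = torch.cat(
    [batch["attention_mask"][:, :pos],
     torch.ones(1, 1, dtype=torch.long),
     batch["attention_mask"][:, pos:]],
    dim=1,
)

out = plm(inputs_embeds=enriched, attention_mask=mask)
# A task loss on `out` would backpropagate into `mapper` only, so the same
# frozen model can be adapted to new domains by swapping mapping networks.
```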

Authors (11)
  1. Zhengyan Zhang (46 papers)
  2. Zhiyuan Zeng (23 papers)
  3. Yankai Lin (125 papers)
  4. Huadong Wang (15 papers)
  5. Deming Ye (10 papers)
  6. Chaojun Xiao (39 papers)
  7. Xu Han (270 papers)
  8. Zhiyuan Liu (433 papers)
  9. Peng Li (390 papers)
  10. Maosong Sun (337 papers)
  11. Jie Zhou (687 papers)
Citations (7)