
Knowledge Inheritance for Pre-trained Language Models (2105.13880v2)

Published 28 May 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Recent explorations of large-scale pre-trained language models (PLMs) have revealed the power of PLMs with huge numbers of parameters, setting off a wave of training ever-larger PLMs. However, training a large-scale PLM requires tremendous computational resources, which may be practically unaffordable. In addition, existing large-scale PLMs are mainly trained from scratch individually, ignoring that many well-trained PLMs are already available. To this end, we explore the question of how existing PLMs can benefit the training of larger PLMs in the future. Specifically, we introduce a pre-training framework named "knowledge inheritance" (KI) and explore how knowledge distillation can serve as auxiliary supervision during pre-training to efficiently learn larger PLMs. Experimental results demonstrate the superiority of KI in training efficiency. We also conduct empirical analyses to explore the effects of teacher PLMs' pre-training settings, including model architecture, pre-training data, etc. Finally, we show that KI can be applied to domain adaptation and knowledge transfer.

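At a high level, KI trains a larger student PLM on its usual self-supervised objective (e.g., masked language modeling) while a smaller, already well-trained teacher PLM provides auxiliary soft targets via knowledge distillation. The sketch below illustrates such a combined objective; the PyTorch calls are standard, but the specific loss weighting and the linear annealing of the teacher's influence are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of a KI-style objective (assumed form, not the paper's exact loss):
# the student learns from MLM labels while a smaller teacher supplies soft targets.
import torch
import torch.nn.functional as F

def ki_loss(student_logits, teacher_logits, labels, alpha, temperature=1.0):
    """Combine self-supervised MLM loss with distillation from the teacher.

    student_logits / teacher_logits: (batch, seq_len, vocab) MLM predictions.
    labels: masked-token targets, with ignored positions set to -100.
    alpha: weight on the teacher's supervision (assumed to be annealed toward 0
           as the larger student outgrows the smaller teacher).
    """
    # Standard MLM cross-entropy on the student's own predictions.
    mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    # KL divergence between teacher and student token distributions (soft targets).
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * kd + (1.0 - alpha) * mlm

def anneal_alpha(step, total_steps, alpha_start=1.0):
    # Linearly decay the teacher's influence over pre-training (assumed schedule).
    return alpha_start * max(0.0, 1.0 - step / total_steps)
```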
Authors (11)
  1. Yujia Qin (41 papers)
  2. Yankai Lin (125 papers)
  3. Jing Yi (11 papers)
  4. Jiajie Zhang (30 papers)
  5. Xu Han (270 papers)
  6. Zhengyan Zhang (46 papers)
  7. Yusheng Su (21 papers)
  8. Zhiyuan Liu (433 papers)
  9. Peng Li (390 papers)
  10. Maosong Sun (337 papers)
  11. Jie Zhou (687 papers)
Citations (44)