HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression (2110.08551v1)

Published 16 Oct 2021 in cs.CL, cs.AI, and cs.LG

Abstract: On many natural language processing tasks, large pre-trained language models (PLMs) have shown overwhelming performance compared with traditional neural network methods. Nevertheless, their huge model size and low inference speed have hindered deployment on resource-limited devices in practice. In this paper, we aim to compress PLMs with knowledge distillation, and propose a hierarchical relational knowledge distillation (HRKD) method to capture both hierarchical and domain relational information. Specifically, to enhance model capability and transferability, we leverage the idea of meta-learning and set up domain-relational graphs to capture the relational information across different domains. To dynamically select the most representative prototypes for each domain, we further propose a hierarchical compare-aggregate mechanism to capture hierarchical relationships. Extensive experiments on public multi-domain datasets demonstrate the superior performance of our HRKD method as well as its strong few-shot learning ability. For reproducibility, we release the code at https://github.com/cheneydon/hrkd.
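The abstract describes the method only at a high level. As a rough illustration of the core idea, weighting per-domain distillation signals by relations learned over domain representations, a minimal PyTorch sketch is given below. The class and function names, the prototype construction, and the MSE feature-distillation loss are assumptions made for illustration; they are not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch (not the authors' implementation): weight per-domain
# feature-distillation losses with relation scores computed from domain
# prototypes. All names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainRelationalWeights(nn.Module):
    """Scores how related each domain is to the others via scaled dot-product
    attention over domain prototype vectors, then turns the scores into
    per-domain loss weights."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, prototypes: torch.Tensor) -> torch.Tensor:
        # prototypes: (num_domains, hidden_dim), e.g. mean-pooled features per domain
        q, k = self.query(prototypes), self.key(prototypes)
        relation = q @ k.t() / prototypes.size(-1) ** 0.5  # (D, D) relation graph
        # Aggregate each domain's relations into a single importance weight.
        return F.softmax(relation.mean(dim=-1), dim=0)      # (D,)


def distillation_loss(student_feats, teacher_feats, weights):
    """Weighted sum of per-domain feature-distillation (MSE) losses."""
    per_domain = torch.stack(
        [F.mse_loss(s, t) for s, t in zip(student_feats, teacher_feats)]
    )
    return (weights * per_domain).sum()


if __name__ == "__main__":
    torch.manual_seed(0)
    num_domains, hidden = 3, 16
    prototypes = torch.randn(num_domains, hidden)            # one prototype per domain
    student = [torch.randn(8, hidden) for _ in range(num_domains)]
    teacher = [torch.randn(8, hidden) for _ in range(num_domains)]
    w = DomainRelationalWeights(hidden)(prototypes)
    print("domain weights:", w.detach().numpy())
    print("loss:", distillation_loss(student, teacher, w).item())
```

In this toy setup the relation scores simply reweight how much each domain contributes to the overall distillation objective; the paper's hierarchical compare-aggregate mechanism for selecting prototypes is not modeled here.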

Authors (4)
  1. Chenhe Dong (7 papers)
  2. Yaliang Li (117 papers)
  3. Ying Shen (76 papers)
  4. Minghui Qiu (58 papers)
Citations (7)
