LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation (2103.06418v1)
Published 11 Mar 2021 in cs.CL
Abstract: Multilingual pre-trained language models (e.g., mBERT, XLM and XLM-R) have shown impressive performance on cross-lingual natural language understanding tasks. However, these models are computationally intensive and difficult to deploy on resource-restricted devices. In this paper, we propose a simple yet effective distillation method (LightMBERT) for transferring the cross-lingual generalization ability of the multilingual BERT to a small student model. The experimental results demonstrate the efficiency and effectiveness of LightMBERT, which is significantly better than the baselines and performs comparably to the teacher mBERT.
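The abstract does not spell out the distillation objective, but a common recipe for compressing a transformer teacher into a shallower student is layer-wise matching of hidden states and attention maps. The sketch below is a minimal, hypothetical illustration of such an objective; the uniform layer mapping, equal hidden width, and equal head count are assumptions for the example, not a description of LightMBERT's actual method.

```python
# Hypothetical sketch of a layer-wise distillation objective: MSE matching of
# hidden states and attention maps between a 12-layer teacher and a 6-layer
# student. Shapes, the uniform layer mapping, and equal widths/head counts are
# illustrative assumptions, not the paper's specification.
import torch
import torch.nn.functional as F


def distillation_loss(teacher_hidden, teacher_attn, student_hidden, student_attn):
    """Sum of MSE losses between mapped teacher/student layers.

    teacher_hidden / student_hidden: lists of [batch, seq, dim] tensors, one per layer
    teacher_attn / student_attn: lists of [batch, heads, seq, seq] attention maps
    """
    # Uniformly map each student layer to a teacher layer (e.g. student layer i
    # of 6 is matched to teacher layer 2i of 12).
    stride = len(teacher_hidden) // len(student_hidden)
    loss = torch.tensor(0.0)
    for i, (s_h, s_a) in enumerate(zip(student_hidden, student_attn)):
        t_h = teacher_hidden[(i + 1) * stride - 1]
        t_a = teacher_attn[(i + 1) * stride - 1]
        loss = loss + F.mse_loss(s_h, t_h) + F.mse_loss(s_a, t_a)
    return loss


if __name__ == "__main__":
    # Random tensors stand in for real teacher/student forward-pass outputs.
    batch, seq, dim, heads = 2, 16, 768, 12
    teacher_layers, student_layers = 12, 6
    t_h = [torch.randn(batch, seq, dim) for _ in range(teacher_layers)]
    t_a = [torch.softmax(torch.randn(batch, heads, seq, seq), dim=-1) for _ in range(teacher_layers)]
    s_h = [torch.randn(batch, seq, dim) for _ in range(student_layers)]
    s_a = [torch.softmax(torch.randn(batch, heads, seq, seq), dim=-1) for _ in range(student_layers)]
    print(distillation_loss(t_h, t_a, s_h, s_a))
```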
- Xiaoqi Jiao (8 papers)
- Yichun Yin (27 papers)
- Lifeng Shang (90 papers)
- Xin Jiang (242 papers)
- Xiao Chen (277 papers)
- Linlin Li (31 papers)
- Fang Wang (116 papers)
- Qun Liu (230 papers)