LICHEE: Improving Language Model Pre-training with Multi-grained Tokenization (2108.00801v2)
Abstract: Language model pre-training based on large corpora has achieved tremendous success in terms of constructing enriched contextual representations and has led to significant performance gains on a diverse range of Natural Language Understanding (NLU) tasks. Despite the success, most current pre-trained language models, such as BERT, are trained based on single-grained tokenization, usually with fine-grained characters or sub-words, making it hard for them to learn the precise meaning of coarse-grained words and phrases. In this paper, we propose a simple yet effective pre-training method named LICHEE to efficiently incorporate multi-grained information of input text. Our method can be applied to various pre-trained language models and improve their representation capability. Extensive experiments conducted on CLUE and SuperGLUE demonstrate that our method achieves comprehensive improvements on a wide variety of NLU tasks in both Chinese and English with little extra inference cost incurred, and that our best ensemble model achieves state-of-the-art performance on the CLUE benchmark competition.
- Weidong Guo
- Mingjun Zhao
- Lusheng Zhang
- Di Niu
- Jinwen Luo
- Zhenhua Liu
- Zhenyang Li
- Jianbo Tang
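As a rough illustration of the multi-grained idea described in the abstract, the sketch below fuses a fine-grained (character/sub-word) embedding with a coarse-grained (word/phrase) embedding at the input layer. This is a minimal sketch, not the paper's exact implementation: the class name, vocabulary sizes, and the choice of element-wise max pooling as the fusion operator are assumptions made here for illustration.

```python
import torch
import torch.nn as nn


class MultiGrainedEmbedding(nn.Module):
    """Hypothetical sketch of multi-grained input embedding fusion.

    Assumes each position carries both a fine-grained (character/sub-word) id
    and an aligned coarse-grained (word/phrase) id, and that the two embeddings
    are merged by element-wise max pooling before entering the transformer
    encoder. Names and the pooling choice are assumptions, not LICHEE's code.
    """

    def __init__(self, fine_vocab_size: int, coarse_vocab_size: int, hidden_size: int):
        super().__init__()
        self.fine_embed = nn.Embedding(fine_vocab_size, hidden_size)
        self.coarse_embed = nn.Embedding(coarse_vocab_size, hidden_size)

    def forward(self, fine_ids: torch.Tensor, coarse_ids: torch.Tensor) -> torch.Tensor:
        # fine_ids, coarse_ids: (batch, seq_len); the coarse-grained id is
        # repeated for every fine-grained token it covers, so shapes match.
        fine = self.fine_embed(fine_ids)        # (batch, seq_len, hidden)
        coarse = self.coarse_embed(coarse_ids)  # (batch, seq_len, hidden)
        # Element-wise max pooling keeps the stronger signal per dimension and
        # adds no parameters or compute to the downstream encoder itself.
        return torch.max(fine, coarse)


# Usage: fuse multi-grained ids for a toy batch; the result can be fed to any
# BERT-style encoder in place of its ordinary token embeddings.
embed = MultiGrainedEmbedding(fine_vocab_size=21128, coarse_vocab_size=50000,
                              hidden_size=768)
fine_ids = torch.randint(0, 21128, (2, 16))
coarse_ids = torch.randint(0, 50000, (2, 16))
fused = embed(fine_ids, coarse_ids)  # shape: (2, 16, 768)
```

Because the fusion happens once at the embedding layer, the encoder architecture is unchanged, which is consistent with the abstract's claim that the method adds little extra inference cost.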