TQCompressor: improving tensor decomposition methods in neural networks via permutations (2401.16367v1)
Abstract: We introduce TQCompressor, a novel method for neural network model compression with improved tensor decompositions. We explore the challenges posed by the computational and storage demands of pre-trained LLMs in NLP tasks and propose a permutation-based enhancement to Kronecker decomposition. This enhancement reduces the loss in model expressivity that is usually associated with factorization. We demonstrate the method by applying it to GPT-2$_{small}$. The result of the compression is the TQCompressedGPT-2 model, featuring 81 million parameters compared to 124 million in GPT-2$_{small}$. We make TQCompressedGPT-2 publicly available. We further enhance its performance through a training strategy involving multi-step knowledge distillation, using only 3.1% of the OpenWebText dataset. TQCompressedGPT-2 surpasses DistilGPT-2 and KnGPT-2 in comparative evaluations, marking an advancement in the efficient and effective deployment of models in resource-constrained environments.
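The abstract does not spell out the permutation algorithm itself, so the following is only a minimal illustrative sketch of the general idea: permute the rows of a weight matrix before taking its nearest Kronecker-product approximation (the standard Van Loan-Pitsianis SVD construction), keeping whichever permutation yields the smallest approximation error. The random-search permutation, the function names, and the toy shapes are assumptions for illustration, not the authors' actual method.

```python
import numpy as np

def nearest_kronecker(W, m1, n1, m2, n2):
    """Van Loan-Pitsianis step: best W ~ A kron B in Frobenius norm."""
    # Rearrange W so that a rank-1 SVD of R yields vec(A) and vec(B).
    R = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    B = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return A, B

def permuted_kronecker(W, m1, n1, m2, n2, n_trials=50, seed=0):
    """Illustrative stand-in for a permutation search: try random row
    permutations P and keep the one whose permuted matrix P @ W is best
    approximated by a single Kronecker product A kron B."""
    rng = np.random.default_rng(seed)
    best_perm, best_A, best_B, best_err = None, None, None, np.inf
    for _ in range(n_trials):
        perm = rng.permutation(W.shape[0])
        A, B = nearest_kronecker(W[perm], m1, n1, m2, n2)
        err = np.linalg.norm(W[perm] - np.kron(A, B))
        if err < best_err:
            best_perm, best_A, best_B, best_err = perm, A, B, err
    return best_perm, best_A, best_B, best_err

# Toy usage: approximate a 16x16 "weight matrix" with 4x4 Kronecker factors,
# storing 2 * 16 numbers instead of 256.
W = np.random.default_rng(1).standard_normal((16, 16))
perm, A, B, err = permuted_kronecker(W, 4, 4, 4, 4)
print("relative error:", err / np.linalg.norm(W))
```

In an actual compression pipeline the permutation would be chosen by a purpose-built algorithm rather than random search, and the factors would be fine-tuned (e.g., via knowledge distillation, as the abstract describes) after factorization.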
- V. Abronin
- A. Naumov
- D. Mazur
- D. Bystrov
- K. Tsarova
- Ar. Melnikov
- I. Oseledets
- S. Dolgov
- R. Brasher
- M. Perelshtein