Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm (2110.08190v4)
Abstract: Conventional wisdom in pruning Transformer-based language models holds that pruning reduces model expressiveness and is therefore more likely to underfit than to overfit. However, under the trending pretrain-and-finetune paradigm, we postulate a counter-traditional hypothesis: pruning increases the risk of overfitting when performed at the fine-tuning phase. In this paper, we aim to resolve this overfitting problem and improve pruning performance via progressive knowledge distillation with error-bound properties. We show for the first time that reducing the risk of overfitting improves the effectiveness of pruning under the pretrain-and-finetune paradigm. Ablation studies and experiments on the GLUE benchmark show that our method outperforms the leading competitors across different tasks.
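To make the abstract's key ingredients concrete, below is a minimal, hypothetical sketch of fine-tuning-time pruning combined with knowledge distillation. It is illustrative only, not the paper's exact algorithm: the cubic sparsity schedule is a common stand-in (Zhu & Gupta, 2017) rather than the paper's error-bounded progressive schedule, and the function names (`sparsity_at_step`, `magnitude_prune_`, `distillation_loss`) and hyperparameters (`final_sparsity`, `temperature`, `alpha`) are assumptions for the sketch.

```python
# Hypothetical sketch: progressive magnitude pruning + knowledge distillation
# during fine-tuning. Assumes a frozen fine-tuned teacher and a student model
# being pruned; NOT the authors' exact method from 2110.08190.
import torch
import torch.nn.functional as F


def sparsity_at_step(step: int, total_steps: int, final_sparsity: float = 0.9) -> float:
    """Cubic schedule (assumed stand-in): sparsity grows progressively
    from 0 toward final_sparsity over the course of fine-tuning."""
    t = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - t) ** 3)


def magnitude_prune_(weight: torch.Tensor, sparsity: float) -> None:
    """In-place magnitude pruning: zero out the smallest-|w| entries."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return
    threshold = weight.abs().flatten().kthvalue(k).values
    weight.data.masked_fill_(weight.abs() <= threshold, 0.0)


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Blend hard-label cross-entropy with soft-label KL distillation,
    which regularizes the pruned student against overfitting."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In a training loop, one would compute `distillation_loss` on each batch, back-propagate, and periodically call `magnitude_prune_` on each prunable weight matrix with the current `sparsity_at_step` value, so sparsity ramps up gradually while the teacher's soft targets constrain the student.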
- Shaoyi Huang
- Dongkuan Xu
- Ian E. H. Yen
- Yijue Wang
- Bingbing Li
- Shiyang Chen
- Mimi Xie
- Sanguthevar Rajasekaran
- Hang Liu
- Caiwen Ding
- Sung-En Chang