Self-Distillation Mixup Training for Non-autoregressive Neural Machine Translation (2112.11640v1)

Published 22 Dec 2021 in cs.CL and cs.AI

Abstract: Recently, non-autoregressive (NAT) models, which predict outputs in parallel, have achieved substantial improvements in generation speed over autoregressive (AT) models. Because they perform worse when trained on raw data, most NAT models are trained as student models on distilled data generated by an AT teacher model, a procedure known as sequence-level Knowledge Distillation. An effective strategy for improving AT models is Self-Distillation Mixup (SDM) Training, which pre-trains a model on raw data, generates distilled data with the pre-trained model itself, and finally re-trains the model on the combination of raw and distilled data. In this work, we apply SDM to NAT models, but find that directly adopting it yields no improvement in translation quality. Through careful analysis, we observe that this failure is correlated with the Modeling Diversity and Confirmation Bias between the AT teacher model and the NAT student models. Based on these findings, we propose an enhanced strategy named SDMRT, which adds two stages to classic SDM: a Pre-Rerank of the self-distilled data, and a Fine-Tune on filtered teacher-distilled data. Our method outperforms baselines by 0.6 to 1.2 BLEU across multiple NAT models. As an additional bonus, for iterative-refinement NAT models our method matches the baselines within half the iteration number, i.e., a 2X acceleration.
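The abstract describes a staged pipeline. The sketch below is one plausible reading of the SDMRT stage ordering, not the authors' code: all function names (`sdmrt_pipeline` and its stub arguments) are hypothetical, and the training, distillation, reranking, and filtering steps are passed in as stand-ins so that only the sequencing of stages is illustrated.

```python
def sdmrt_pipeline(raw_data, at_teacher_distill, nat_distill, rerank, filter_fn, train):
    """Run the SDMRT stages in order (a hypothetical sketch of the
    abstract's description); returns the final student and a stage log."""
    log = []
    # 1. Pre-train the NAT student on raw data (classic SDM, stage one).
    student = train("nat_student", raw_data)
    log.append("pretrain")
    # 2. Self-distill: the pre-trained student translates the source side.
    self_distilled = nat_distill(student, raw_data)
    log.append("self_distill")
    # 3. Pre-Rerank the self-distilled data (SDMRT's first added stage).
    reranked = rerank(self_distilled)
    log.append("pre_rerank")
    # 4. Re-train on the combination of raw and (reranked) self-distilled data.
    student = train("nat_student", raw_data + reranked)
    log.append("retrain")
    # 5. Fine-tune on filtered teacher-distilled data (SDMRT's second added stage).
    teacher_data = filter_fn(at_teacher_distill(raw_data))
    student = train("nat_student", teacher_data)
    log.append("finetune")
    return student, log
```

With trivial stand-ins (e.g. `train` returning a label and the data it saw), the returned log shows the five stages in the order sketched above.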

Authors (14)
  1. Jiaxin Guo (40 papers)
  2. Minghan Wang (23 papers)
  3. Daimeng Wei (31 papers)
  4. Hengchao Shang (22 papers)
  5. Yuxia Wang (41 papers)
  6. Zongyao Li (23 papers)
  7. Zhengzhe Yu (4 papers)
  8. Zhanglin Wu (19 papers)
  9. Yimeng Chen (12 papers)
  10. Chang Su (37 papers)
  11. Min Zhang (630 papers)
  12. Lizhi Lei (3 papers)
  13. Hao Yang (328 papers)
  14. Shimin Tao (31 papers)
Citations (10)