Glancing Transformer for Non-Autoregressive Neural Machine Translation (2008.07905v3)
Abstract: Recent work on non-autoregressive neural machine translation (NAT) aims to improve efficiency through parallel decoding without sacrificing quality. However, existing NAT methods are either inferior to the Transformer or require multiple decoding passes, which reduces the speedup. We propose the Glancing Language Model (GLM), a method to learn word interdependency for single-pass parallel generation models. With GLM, we develop the Glancing Transformer (GLAT) for machine translation. With only single-pass parallel decoding, GLAT generates high-quality translations with an 8x-15x speedup. Experiments on multiple WMT language directions show that GLAT outperforms all previous single-pass non-autoregressive methods and is nearly comparable to the Transformer, reducing the gap to 0.25-0.9 BLEU points.
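To make the glancing idea concrete, here is a minimal sketch of a glancing-sampling training step, assuming a PyTorch setup. The function name `glancing_sample`, the `mask_id` and `ratio` parameters, and the toy tensors are illustrative assumptions, not the authors' code; the actual GLAT implementation operates on decoder input embeddings rather than raw token ids.

```python
# Minimal sketch of glancing sampling (hypothetical helper, not the official GLAT code).
import torch

def glancing_sample(target, first_pass_pred, mask_id, ratio=0.5):
    """Reveal a fraction of target tokens in the decoder input.

    The number of revealed tokens per sentence is proportional to the
    Hamming distance between the first-pass parallel prediction and the
    target, so sentences the model gets more wrong receive more hints.
    """
    # Hamming distance: positions where the first pass disagrees with the target.
    distance = (first_pass_pred != target).float().sum(dim=-1)   # (batch,)
    n_reveal = (distance * ratio).long()                         # tokens to reveal per sentence

    glanced_input = torch.full_like(target, mask_id)             # start fully masked
    for i in range(target.size(0)):
        # Randomly choose positions whose ground-truth token is revealed.
        perm = torch.randperm(target.size(1))[: n_reveal[i]]
        glanced_input[i, perm] = target[i, perm]

    # The training loss is computed only on positions that remain masked.
    loss_mask = glanced_input.eq(mask_id)
    return glanced_input, loss_mask

# Toy usage with made-up token ids (MASK = 0).
tgt = torch.tensor([[5, 6, 7, 8], [9, 10, 11, 12]])
pred = torch.tensor([[5, 2, 7, 3], [1, 10, 2, 12]])
inputs, loss_mask = glancing_sample(tgt, pred, mask_id=0)
```

The second pass then predicts the still-masked positions conditioned on the revealed target tokens, which is how GLM exposes the model to word interdependency while keeping inference single-pass.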
- Lihua Qian (8 papers)
- Hao Zhou (351 papers)
- Yu Bao (36 papers)
- Mingxuan Wang (83 papers)
- Lin Qiu (47 papers)
- Weinan Zhang (322 papers)
- Yong Yu (219 papers)
- Lei Li (1293 papers)