Glancing Transformer for Non-Autoregressive Neural Machine Translation (2008.07905v3)

Published 18 Aug 2020 in cs.CL

Abstract: Recent work on non-autoregressive neural machine translation (NAT) aims at improving efficiency through parallel decoding without sacrificing quality. However, existing NAT methods are either inferior to the Transformer or require multiple decoding passes, leading to reduced speedup. We propose the Glancing Language Model (GLM), a method to learn word interdependency for single-pass parallel generation models. With GLM, we develop the Glancing Transformer (GLAT) for machine translation. With only single-pass parallel decoding, GLAT generates high-quality translations with an 8-15 times speedup. Experiments on multiple WMT language directions show that GLAT outperforms all previous single-pass non-autoregressive methods and is nearly comparable to the Transformer, reducing the gap to 0.25-0.9 BLEU points.
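
The glancing idea summarized above can be sketched concretely: during training the model first decodes the whole target in parallel, the distance between that prediction and the reference decides how many reference tokens the decoder is allowed to "glance" at, and the remaining positions become the prediction targets. The toy Python sketch below illustrates only this sampling step; names such as `glancing_sample`, the masking placeholder, and the example token ids are illustrative assumptions, not the authors' implementation.

```python
import random

def hamming_distance(pred, ref):
    """Count positions where the parallel prediction disagrees with the reference."""
    return sum(p != r for p, r in zip(pred, ref))

def glancing_sample(pred, ref, ratio=0.5):
    """Replace a distance-proportional subset of decoder inputs with reference tokens.

    The harder the sentence (larger distance), the more target words the model
    may "glance" at; the remaining positions stay masked and must be predicted.
    """
    n_glance = int(ratio * hamming_distance(pred, ref))
    glance_positions = set(random.sample(range(len(ref)), n_glance))
    MASK = -1  # placeholder id for unobserved positions (assumed convention)
    decoder_input = [ref[i] if i in glance_positions else MASK for i in range(len(ref))]
    predict_positions = [i for i in range(len(ref)) if i not in glance_positions]
    return decoder_input, predict_positions

if __name__ == "__main__":
    reference = [5, 12, 7, 9, 3, 14]   # target token ids (made up)
    first_pass = [5, 8, 7, 2, 3, 1]    # tokens from one parallel decoding pass (made up)
    inp, targets = glancing_sample(first_pass, reference)
    print("decoder input :", inp)
    print("predict at    :", targets)
```

In the paper's framing, this sampling gives the single-pass parallel decoder an adaptive curriculum: easy sentences are trained almost fully in parallel, while hard sentences expose more ground-truth context so the model can learn word interdependency without multiple decoding passes at inference time.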

Authors (8)
  1. Lihua Qian (8 papers)
  2. Hao Zhou (351 papers)
  3. Yu Bao (36 papers)
  4. Mingxuan Wang (83 papers)
  5. Lin Qiu (47 papers)
  6. Weinan Zhang (322 papers)
  7. Yong Yu (219 papers)
  8. Lei Li (1293 papers)
Citations (152)