
Diformer: Directional Transformer for Neural Machine Translation (2112.11632v2)

Published 22 Dec 2021 in cs.CL and cs.AI

Abstract: Autoregressive (AR) and Non-autoregressive (NAR) models each have their own advantages in performance and latency; combining them into one model may take advantage of both. Current combination frameworks focus on integrating multiple decoding paradigms with a unified generative model, e.g., a Masked Language Model. However, this generalization can harm performance due to the gap between the training objective and inference. In this paper, we aim to close the gap by preserving the original objectives of AR and NAR under a unified framework. Specifically, we propose the Directional Transformer (Diformer), which jointly models AR and NAR as three generation directions (left-to-right, right-to-left and straight) with a newly introduced direction variable that constrains the prediction of each token to the specific dependencies allowed under that direction. The unification achieved by direction successfully preserves the original dependency assumptions used in AR and NAR, retaining both generalization and performance. Experiments on 4 WMT benchmarks demonstrate that Diformer outperforms current unified-modelling works by more than 1.5 BLEU points for both AR and NAR decoding, and is also competitive with state-of-the-art independent AR and NAR models.
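The direction variable effectively selects which dependency pattern each token's prediction is allowed to use. A minimal sketch of what such direction-conditioned self-attention masks could look like (the function name, the direction labels, and the use of PyTorch are illustrative assumptions, not the authors' implementation):

```python
import torch

def direction_mask(seq_len: int, direction: str) -> torch.Tensor:
    """Build a boolean self-attention mask for one generation direction.

    True marks positions a token may attend to. The three directions follow
    the abstract's description: left-to-right (causal), right-to-left
    (anti-causal), and "straight" (fully visible, as in NAR decoding).
    """
    if direction == "l2r":        # each token sees itself and its left context
        return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    if direction == "r2l":        # each token sees itself and its right context
        return torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool))
    if direction == "straight":   # NAR-style: every position is visible
        return torch.ones(seq_len, seq_len, dtype=torch.bool)
    raise ValueError(f"unknown direction: {direction}")

# Example: masks for a length-4 sequence under each direction.
for d in ("l2r", "r2l", "straight"):
    print(d, direction_mask(4, d), sep="\n")
```

Keeping the AR masks strictly causal (in either direction) and the NAR mask fully visible is what lets a single model preserve the original AR and NAR training objectives rather than approximating both with one masked-prediction objective.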

Authors (11)
  1. Minghan Wang (23 papers)
  2. Jiaxin Guo (40 papers)
  3. Yuxia Wang (41 papers)
  4. Daimeng Wei (31 papers)
  5. Hengchao Shang (22 papers)
  6. Chang Su (37 papers)
  7. Yimeng Chen (12 papers)
  8. Yinglu Li (6 papers)
  9. Min Zhang (630 papers)
  10. Shimin Tao (31 papers)
  11. Hao Yang (328 papers)
Citations (5)
