
Multi-Granularity Optimization for Non-Autoregressive Translation (2210.11017v1)

Published 20 Oct 2022 in cs.CL

Abstract: Despite low latency, non-autoregressive machine translation (NAT) suffers severe performance deterioration due to the naive independence assumption. This assumption is further strengthened by cross-entropy loss, which encourages a strict match between the hypothesis and the reference token by token. To alleviate this issue, we propose multi-granularity optimization for NAT, which collects model behaviors on translation segments of various granularities and integrates feedback for backpropagation. Experiments on four WMT benchmarks show that the proposed method significantly outperforms the baseline models trained with cross-entropy loss, and achieves the best performance on WMT'16 En-Ro and highly competitive results on WMT'14 En-De for fully non-autoregressive translation.
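The abstract's core idea, scoring translation segments at several granularities and aggregating the feedback, can be illustrated with a minimal sketch. The function names, the token-overlap metric, and the granularity sizes below are illustrative assumptions, not the paper's exact formulation (which integrates segment-level feedback into backpropagation rather than computing a standalone score):

```python
def segments(tokens, size):
    """Split a token list into consecutive segments of at most `size` tokens."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def segment_score(hyp_seg, ref_seg):
    """Token-overlap score between two segments (illustrative metric,
    standing in for whatever segment-level reward the method uses)."""
    if not hyp_seg and not ref_seg:
        return 1.0
    overlap = len(set(hyp_seg) & set(ref_seg))
    return overlap / max(len(hyp_seg), len(ref_seg))

def multi_granularity_score(hyp, ref, granularities=(1, 2, 4)):
    """Average the per-segment scores across several granularities,
    so both token-level and phrase-level behavior contribute feedback."""
    total = 0.0
    for g in granularities:
        hyp_segs = segments(hyp, g)
        ref_segs = segments(ref, g)
        n = max(len(hyp_segs), len(ref_segs))
        # Pad the shorter side with empty segments so every segment is scored.
        pairs = zip(hyp_segs + [[]] * (n - len(hyp_segs)),
                    ref_segs + [[]] * (n - len(ref_segs)))
        total += sum(segment_score(h, r) for h, r in pairs) / n
    return total / len(granularities)
```

A perfect hypothesis scores 1.0 at every granularity, while a hypothesis that matches the reference only at the token level (e.g. with reordered tokens) is penalized at the coarser granularities, which is the kind of signal a strict token-by-token cross-entropy loss cannot provide.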

Authors (4)
  1. Yafu Li (26 papers)
  2. Leyang Cui (50 papers)
  3. Yongjing Yin (19 papers)
  4. Yue Zhang (620 papers)
Citations (6)