DePA: Improving Non-autoregressive Machine Translation with Dependency-Aware Decoder (2203.16266v2)

Published 30 Mar 2022 in cs.CL

Abstract: Non-autoregressive machine translation (NAT) models have lower translation quality than autoregressive translation (AT) models because NAT decoders do not depend on previous target tokens in the decoder input. We propose a novel and general Dependency-Aware Decoder (DePA) to enhance target dependency modeling in the decoder of fully NAT models from two perspectives: decoder self-attention and decoder input. First, we propose an autoregressive forward-backward pre-training phase before NAT training, which enables the NAT decoder to gradually learn bidirectional target dependencies for the final NAT training. Second, we transform the decoder input from the source language representation space to the target language representation space through a novel attentive transformation process, which enables the decoder to better capture target dependencies. DePA can be applied to any fully NAT model. Extensive experiments show that DePA consistently improves highly competitive and state-of-the-art fully NAT models on widely used WMT and IWSLT benchmarks by up to 1.88 BLEU, while maintaining inference latency comparable to other fully NAT models.
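
To make the "attentive transformation" idea concrete, the sketch below shows one plausible way a NAT decoder input (e.g., copied source representations) could be re-expressed in the target-language representation space by attending over the target embedding table. This is a minimal illustration under stated assumptions, not the authors' implementation; the module, parameter names, and dimensions are hypothetical.

```python
# Hypothetical sketch: map decoder inputs from the source representation space
# into the target representation space via attention over target embeddings.
# Not the DePA reference implementation.
import torch
import torch.nn as nn


class AttentiveInputTransform(nn.Module):
    def __init__(self, d_model: int, tgt_vocab_size: int):
        super().__init__()
        # Target-language embedding table serves as keys/values.
        self.tgt_embed = nn.Embedding(tgt_vocab_size, d_model)
        self.query_proj = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, decoder_input: torch.Tensor) -> torch.Tensor:
        # decoder_input: (batch, tgt_len, d_model), e.g. copied source encodings.
        q = self.query_proj(decoder_input)                    # (B, T, D)
        k = v = self.tgt_embed.weight                         # (V, D)
        attn = torch.softmax(q @ k.t() * self.scale, dim=-1)  # (B, T, V)
        # Each position becomes a mixture of target embeddings, i.e. a point
        # in the target-language representation space.
        return attn @ v                                       # (B, T, D)


if __name__ == "__main__":
    transform = AttentiveInputTransform(d_model=64, tgt_vocab_size=1000)
    x = torch.randn(2, 7, 64)   # stand-in for copied source representations
    print(transform(x).shape)   # torch.Size([2, 7, 64])
```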

Authors (6)
  1. Jiaao Zhan (2 papers)
  2. Qian Chen (264 papers)
  3. Boxing Chen (67 papers)
  4. Wen Wang (144 papers)
  5. Yu Bai (136 papers)
  6. Yang Gao (761 papers)
Citations (2)
