
A Preordered RNN Layer Boosts Neural Machine Translation in Low Resource Settings (2112.13960v1)

Published 28 Dec 2021 in cs.CL

Abstract: Neural Machine Translation (NMT) models are strong enough to convey semantic and syntactic information from the source language to the target language. However, these models suffer from the need for a large amount of data to learn their parameters; as a result, for languages with scarce data, they risk underperforming. We propose to augment an attention-based neural network with reordering information to alleviate the lack of data. This augmentation improves translation quality for both English-to-Persian and Persian-to-English by up to 6% BLEU absolute over the baseline models.
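The abstract describes the approach only at a high level: a second, "preordered" view of the source sentence is fed to the encoder alongside the original order. Below is a minimal sketch of one way such an augmentation could look; it is not the paper's implementation. The `PreorderedEncoder` class, the use of GRUs, the concatenation strategy, and all dimensions are illustrative assumptions, and the preordering permutation is assumed to come from an external reordering model.

```python
# A minimal sketch (not the paper's implementation) of augmenting an
# attention-based NMT encoder with reordering information: a second RNN
# reads the source tokens in a target-like "preordered" permutation, and
# its states are concatenated with the original encoder states.
# All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class PreorderedEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 256, hid_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # RNN over the source in its original order.
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        # Second RNN over the preordered (reordered) source.
        self.pre_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, src: torch.Tensor, preorder: torch.Tensor) -> torch.Tensor:
        # src:      (batch, src_len) token ids in original order
        # preorder: (batch, src_len) permutation indices produced by an
        #           external preordering model (assumed given)
        emb = self.embed(src)                          # (batch, src_len, emb_dim)
        reordered = torch.gather(
            emb, 1, preorder.unsqueeze(-1).expand_as(emb)
        )
        states, _ = self.rnn(emb)                      # (batch, src_len, 2*hid_dim)
        pre_states, _ = self.pre_rnn(reordered)        # (batch, src_len, 2*hid_dim)
        # Concatenate both views; a standard attention decoder would
        # attend over this joint representation.
        return torch.cat([states, pre_states], dim=-1)
```

In this sketch, the decoder's attention mechanism is unchanged; the extra reordering signal reaches it only through the enlarged encoder states, which keeps the modification small enough to be viable in low-resource settings.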

Authors (2)
  1. Mohaddeseh Bastan (6 papers)
  2. Shahram Khadivi (29 papers)
Citations (1)
