
Dynamic Context Selection for Document-level Neural Machine Translation via Reinforcement Learning (2010.04314v1)

Published 9 Oct 2020 in cs.CL

Abstract: Document-level neural machine translation has yielded attractive improvements. However, the majority of existing methods roughly use all context sentences within a fixed scope, neglecting the fact that different source sentences need different amounts of context. To address this problem, we propose an effective approach to select dynamic context so that the document-level translation model can utilize the more useful selected context sentences to produce better translations. Specifically, we introduce a selection module, independent of the translation module, that scores each candidate context sentence. We then propose two strategies to explicitly select a variable number of context sentences and feed them into the translation module. The two modules are trained end-to-end via reinforcement learning, with a novel reward that encourages the selection and utilization of dynamic context sentences. Experiments demonstrate that our approach selects adaptive context for different source sentences and significantly improves the performance of document-level translation methods.
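The score-then-select idea in the abstract can be illustrated with a minimal sketch. Note the specifics here are hypothetical stand-ins: the paper's selection module is a learned network trained jointly with the translator via reinforcement learning, whereas this sketch uses a fixed cosine-similarity scorer and a simple threshold strategy, only to show how a variable number of context sentences can be chosen per source sentence.

```python
import math

def score_context(source_vec, context_vecs):
    """Score each candidate context sentence against the source sentence.

    Hypothetical scorer: cosine similarity between sentence embeddings,
    squashed into (0, 1) with a sigmoid. The paper uses a learned
    selection module instead of this heuristic.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        return math.sqrt(dot(a, a)) or 1.0

    scores = []
    for c in context_vecs:
        cos = dot(source_vec, c) / (norm(source_vec) * norm(c))
        scores.append(1.0 / (1.0 + math.exp(-5.0 * cos)))
    return scores

def select_dynamic_context(scores, threshold=0.5):
    """Threshold strategy: keep every candidate whose score exceeds the
    threshold, so different source sentences end up with different
    numbers of context sentences (possibly zero)."""
    return [i for i, s in enumerate(scores) if s > threshold]

# Toy usage: a source sentence and three candidate context sentences,
# represented by 2-d embeddings for illustration only.
source = [1.0, 0.0]
candidates = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
selected = select_dynamic_context(score_context(source, candidates))
```

The selected indices would then determine which context sentences are concatenated with the source and fed into the translation module; under RL training, the reward signal (e.g., translation quality improvement) updates the scorer so that useful context is ranked higher.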

Authors (4)
  1. Xiaomian Kang (2 papers)
  2. Yang Zhao (382 papers)
  3. Jiajun Zhang (176 papers)
  4. Chengqing Zong (65 papers)
Citations (55)
