Non-Parametric Online Learning from Human Feedback for Neural Machine Translation (2109.11136v3)

Published 23 Sep 2021 in cs.CL

Abstract: We study the problem of online learning with human feedback in human-in-the-loop machine translation, in which human translators revise machine-generated translations and the corrected translations are then used to improve the neural machine translation (NMT) system. However, previous methods require online model updating or additional translation memory networks to achieve high-quality performance, making them inflexible and inefficient in practice. In this paper, we propose a novel non-parametric online learning method that does not change the model structure. The approach introduces two k-nearest-neighbor (kNN) modules: one memorizes the human feedback, i.e., the correct sentences provided by human translators, while the other adaptively balances the use of the historical human feedback and the original NMT model. Experiments on the EMEA and JRC-Acquis benchmarks demonstrate that the proposed method obtains substantial improvements in translation accuracy and achieves better adaptation performance with fewer repeated human correction operations.
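The mechanism resembles kNN-MT-style retrieval at decoding time: human-corrected translations populate a datastore, and retrieved neighbors are interpolated with the NMT model's next-token distribution. Below is a minimal sketch of that idea, assuming a brute-force datastore and a simple distance-based gate; the names (`FeedbackDatastore`, `knn_interpolated_probs`), the softmax temperature, and the confidence heuristic standing in for the paper's second (balancing) kNN module are illustrative, not the paper's exact formulation.

```python
import numpy as np

class FeedbackDatastore:
    """Non-parametric memory of human feedback:
    (decoder hidden state, corrected target token) pairs."""

    def __init__(self):
        self.keys = []    # decoder hidden states (context vectors)
        self.values = []  # corrected target-token ids

    def add(self, hidden_states, token_ids):
        # Store one entry per target position of a human-corrected translation.
        self.keys.extend(hidden_states)
        self.values.extend(token_ids)

    def search(self, query, k=8):
        # Brute-force k-nearest-neighbor search by L2 distance
        # (a real system would use an ANN index such as FAISS).
        keys = np.stack(self.keys)
        dists = np.linalg.norm(keys - query, axis=1)
        idx = np.argsort(dists)[:k]
        return dists[idx], [self.values[i] for i in idx]

def knn_interpolated_probs(model_probs, query, store, vocab_size,
                           k=8, temperature=10.0):
    """Blend the NMT distribution with a kNN distribution built from
    retrieved human feedback; no model parameters are updated."""
    if not store.keys:
        return model_probs
    dists, tokens = store.search(query, k)
    # Softmax over negative distances -> kNN distribution over the vocabulary.
    weights = np.exp(-dists / temperature)
    weights /= weights.sum()
    knn_probs = np.zeros(vocab_size)
    for w, t in zip(weights, tokens):
        knn_probs[t] += w
    # Adaptive gate (assumed heuristic): trust the memory more when the
    # nearest neighbor is close, the base model otherwise.
    lam = float(np.exp(-dists.min() / temperature))
    return lam * knn_probs + (1.0 - lam) * model_probs
```

At each decoding step, the current decoder hidden state serves as the query, so a correction stored after one sentence can influence the very next one, with no gradient update to the underlying NMT model.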

Authors (6)
  1. Dongqi Wang (8 papers)
  2. Haoran Wei (55 papers)
  3. Zhirui Zhang (46 papers)
  4. Shujian Huang (106 papers)
  5. Jun Xie (66 papers)
  6. Jiajun Chen (125 papers)
Citations (14)
