Correct-and-Memorize: Learning to Translate from Interactive Revisions (1907.03468v2)

Published 8 Jul 2019 in cs.CL

Abstract: State-of-the-art machine translation models are still not on par with human translators. Previous work incorporates human interactions into the neural machine translation process to improve results in the target language. However, not all model-translation errors are equal: some are critical while others are minor. Meanwhile, the same translation mistakes recur in similar contexts. To address both issues, we propose CAMIT, a novel method for translating in an interactive environment. Our method works with critical revision instructions, allowing humans to correct arbitrary words in model-translated sentences. In addition, CAMIT learns from and softly memorizes revision actions based on their context, alleviating the issue of repeated mistakes. Experiments in both ideal and real interactive translation settings demonstrate that CAMIT significantly improves machine translation results while requiring fewer revision instructions from humans than previous methods.
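The idea of "softly memorizing" revision actions can be illustrated with a minimal sketch: store (context vector, corrected token) pairs whenever a human revises the output, then bias the model's next-token distribution toward past corrections made in similar contexts. The class and interpolation scheme below are hypothetical simplifications for illustration, not the paper's exact formulation.

```python
import math

def dot(u, v):
    # Inner product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class RevisionMemory:
    """Hypothetical soft memory of human revision actions."""

    def __init__(self, lam=0.5):
        self.keys = []    # context vectors at the time of each revision
        self.values = []  # the token the human corrected the output to
        self.lam = lam    # interpolation weight for the memory distribution

    def record(self, context_vec, corrected_token):
        # Store one revision action keyed by its decoding context.
        self.keys.append(context_vec)
        self.values.append(corrected_token)

    def retrieve(self, context_vec, vocab):
        # Similarity-weighted ("soft") vote over stored corrections.
        if not self.keys:
            return {w: 0.0 for w in vocab}
        weights = softmax([dot(context_vec, k) for k in self.keys])
        dist = {w: 0.0 for w in vocab}
        for w_i, tok in zip(weights, self.values):
            dist[tok] += w_i
        return dist

    def combine(self, model_dist, context_vec):
        # Interpolate the model's distribution with the memory distribution,
        # so a mistake corrected once is less likely to recur in that context.
        mem = self.retrieve(context_vec, list(model_dist))
        return {w: (1 - self.lam) * p + self.lam * mem[w]
                for w, p in model_dist.items()}
```

For example, after the memory records that "cat" was the human's correction in a given context, `combine` shifts probability mass toward "cat" whenever a similar context recurs, without hard-coding the correction.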

Authors (6)
  1. Rongxiang Weng (26 papers)
  2. Hao Zhou (351 papers)
  3. Shujian Huang (106 papers)
  4. Lei Li (1293 papers)
  5. Yifan Xia (14 papers)
  6. Jiajun Chen (125 papers)
Citations (16)