
Autocorrect in the Process of Translation -- Multi-task Learning Improves Dialogue Machine Translation (2103.16189v2)

Published 30 Mar 2021 in cs.CL

Abstract: Automatic translation of dialogue text is in high demand in many real-life scenarios. However, existing neural machine translation systems deliver unsatisfactory results on it. In this paper, we conduct a deep analysis of a dialogue corpus and summarize three major issues in dialogue translation: pronoun dropping, punctuation dropping, and typos. In response to these challenges, we propose a joint learning method that identifies omissions and typos and utilizes context to translate dialogue utterances. To properly evaluate performance, we present a manually annotated dataset of 1,931 Chinese-English parallel utterances from 300 dialogues as a benchmark testbed for dialogue translation. Our experiments show that the proposed method improves translation quality by 3.2 BLEU over the baselines. It also raises the recovery rate of omitted pronouns from 26.09% to 47.16%. We will publish the code and dataset publicly at https://github.com/rgwt123/DialogueMT.
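
The joint learning idea in the abstract, tagging omissions and typos while translating, can be illustrated with a minimal multi-task sketch: a shared encoder feeds both a token-level tagging head and a translation head, and the two losses are combined with a weighting factor. The module names, tag set, weighting factor, and the toy translation head below are illustrative assumptions, not the authors' published architecture.

```python
# Hedged sketch of a multi-task objective in the spirit of the paper.
# All hyperparameters and module names are assumptions for illustration.
import torch
import torch.nn as nn

class JointDialogueMT(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, num_tags=4, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(src_vocab, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Head (a): per-token tags, e.g. {OK, DROP_PRO, DROP_PUN, TYPO}
        # (an assumed tag inventory mirroring the paper's three issues).
        self.tag_head = nn.Linear(d_model, num_tags)
        # Head (b): toy stand-in for translation; a real system would use
        # a full Transformer decoder with cross-attention over the encoder.
        self.trans_head = nn.Linear(d_model, tgt_vocab)

    def forward(self, src_ids):
        h = self.encoder(self.embed(src_ids))          # (B, T, d_model)
        return self.tag_head(h), self.trans_head(h)    # tag and MT logits

def joint_loss(tag_logits, tag_gold, mt_logits, tgt_gold, alpha=0.5):
    """Weighted sum of translation and tagging cross-entropy losses.
    alpha is an assumed interpolation weight, not a value from the paper."""
    ce = nn.CrossEntropyLoss()
    loss_tag = ce(tag_logits.flatten(0, 1), tag_gold.flatten())
    loss_mt = ce(mt_logits.flatten(0, 1), tgt_gold.flatten())
    return loss_mt + alpha * loss_tag
```

In a training step, `tag_gold` would mark source positions where a pronoun or punctuation mark was dropped or a typo occurred, so the shared encoder is pushed to represent exactly the phenomena that hurt dialogue translation.
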

Authors (5)
  1. Tao Wang (700 papers)
  2. Chengqi Zhao (15 papers)
  3. Mingxuan Wang (83 papers)
  4. Lei Li (1293 papers)
  5. Deyi Xiong (103 papers)
Citations (13)
