PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on User-Generated Contents (2011.02121v1)

Published 4 Nov 2020 in cs.CL

Abstract: Neural Machine Translation (NMT) has shown drastic improvements in quality when translating clean input, such as text from the news domain. However, existing studies suggest that NMT still struggles with certain kinds of noisy input, such as User-Generated Contents (UGC) on the Internet. To make better use of NMT for cross-cultural communication, one of the most promising directions is to develop a model that correctly handles these expressions. Although the importance of this problem has been recognized, it is still not clear what creates the large gap in performance between the translation of clean input and that of UGC. To answer this question, we present a new dataset, PheMT, for evaluating the robustness of MT systems against specific linguistic phenomena in Japanese-English translation. Our experiments with the created dataset revealed that not only our in-house models but even widely used off-the-shelf systems are greatly disturbed by the presence of certain phenomena.
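
The abstract implies a simple evaluation recipe: translate clean and noise-bearing variants of the same source sentences and compare translation quality per linguistic phenomenon. Below is a minimal sketch of such a phenomenon-wise robustness check; the data layout (clean/noisy source pairs sharing one English reference), the `robustness_report` helper, and the `translate` hook are illustrative assumptions, not the paper's actual dataset format or code.

```python
# Sketch of a phenomenon-wise MT robustness check in the spirit of PheMT.
# Assumptions (illustrative, not the paper's actual format): each linguistic
# phenomenon maps to (clean source, noisy source, English reference) triples,
# and `translate` wraps whatever Japanese-English MT system is under test.
from typing import Callable, Dict, List, Tuple

import sacrebleu  # pip install sacrebleu

Example = Tuple[str, str, str]  # (clean source, noisy source, reference)


def robustness_report(
    data: Dict[str, List[Example]],
    translate: Callable[[List[str]], List[str]],
) -> Dict[str, Tuple[float, float]]:
    """Per-phenomenon (clean BLEU, noisy BLEU) for one MT system."""
    report = {}
    for phenomenon, examples in data.items():
        clean_src = [c for c, _, _ in examples]
        noisy_src = [n for _, n, _ in examples]
        refs = [r for _, _, r in examples]
        # Translate both variants against the same references; the score
        # gap isolates the effect of the noise phenomenon itself.
        clean_bleu = sacrebleu.corpus_bleu(translate(clean_src), [refs]).score
        noisy_bleu = sacrebleu.corpus_bleu(translate(noisy_src), [refs]).score
        report[phenomenon] = (clean_bleu, noisy_bleu)
    return report
```

The per-phenomenon gap between the two scores indicates how strongly that noise type disturbs the system, which is the kind of comparison the abstract reports for both in-house and off-the-shelf models.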

Authors (7)
  1. Ryo Fujii (14 papers)
  2. Masato Mita (19 papers)
  3. Kaori Abe (9 papers)
  4. Kazuaki Hanawa (5 papers)
  5. Makoto Morishita (20 papers)
  6. Jun Suzuki (86 papers)
  7. Kentaro Inui (119 papers)
Citations (5)
