Parallel Corpus Filtering via Pre-trained Language Models (2005.06166v1)

Published 13 May 2020 in cs.CL and cs.LG

Abstract: Web-crawled data provides a good source of parallel corpora for training machine translation models. It is automatically obtained, but extremely noisy, and recent work shows that neural machine translation systems are more sensitive to noise than traditional statistical machine translation methods. In this paper, we propose a novel approach to filter out noisy sentence pairs from web-crawled corpora via pre-trained language models. We measure sentence parallelism by leveraging the multilingual capability of BERT and use the Generative Pre-training (GPT) language model as a domain filter to balance data domains. We evaluate the proposed method on the WMT 2018 Parallel Corpus Filtering shared task and on our own web-crawled Japanese-Chinese parallel corpus. Our method significantly outperforms baselines and achieves a new state of the art. In an unsupervised setting, it achieves performance comparable to the top-1 supervised method. We also make our web-crawled Japanese-Chinese parallel corpus publicly available.
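
The filtering idea lends itself to a small sketch. The snippet below is an illustrative approximation, not the authors' released pipeline: it uses mean-pooled multilingual BERT embeddings with cosine similarity as a crude stand-in for the paper's learned parallelism scorer, and GPT-2 perplexity as a stand-in for the GPT domain filter. The model names (`bert-base-multilingual-cased`, `gpt2`), helper functions, and thresholds are assumptions chosen for demonstration.

```python
# Illustrative sketch of BERT-based parallelism scoring plus a GPT-style
# domain/fluency filter for web-crawled sentence pairs. This approximates,
# but is not identical to, the method described in the paper.
import torch
from transformers import AutoModel, AutoTokenizer, GPT2LMHeadModel, GPT2Tokenizer

bert_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()
gpt_tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def embed(sentence: str) -> torch.Tensor:
    """Mean-pool multilingual BERT token states into a single sentence vector."""
    enc = bert_tok(sentence, return_tensors="pt", truncation=True, max_length=128)
    hidden = bert(**enc).last_hidden_state           # (1, seq_len, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)       # (1, seq_len, 1)
    return (hidden * mask).sum(1) / mask.sum(1)      # (1, hidden)

@torch.no_grad()
def parallelism_score(src: str, tgt: str) -> float:
    """Cosine similarity of source/target embeddings as a rough parallelism proxy."""
    return torch.cosine_similarity(embed(src), embed(tgt)).item()

@torch.no_grad()
def perplexity(sentence: str) -> float:
    """GPT-2 perplexity; lower values suggest more natural, in-domain text."""
    enc = gpt_tok(sentence, return_tensors="pt", truncation=True, max_length=128)
    loss = gpt(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def keep_pair(src: str, tgt: str, sim_thresh: float = 0.7, ppl_thresh: float = 500.0) -> bool:
    """Keep a crawled pair only if it looks parallel and the target reads fluently.
    Thresholds are hypothetical and would be tuned on held-out data."""
    return parallelism_score(src, tgt) >= sim_thresh and perplexity(tgt) <= ppl_thresh
```

In a realistic setup the domain-filter language model would be trained on in-domain text in the target language (e.g. Chinese for a Japanese-Chinese corpus); `gpt2` here is English-only and serves purely to show where such a filter plugs in.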

Authors (3)
  1. Boliang Zhang (9 papers)
  2. Ajay Nagesh (7 papers)
  3. Kevin Knight (29 papers)
Citations (29)