DirectQE: Direct Pretraining for Machine Translation Quality Estimation (2105.07149v1)

Published 15 May 2021 in cs.CL

Abstract: Machine Translation Quality Estimation (QE) is the task of predicting the quality of machine translations without relying on any reference. Recently, the predictor-estimator framework has trained the predictor as a feature extractor that leverages extra parallel corpora without QE labels, achieving promising QE performance. However, we argue that there are gaps between the predictor and the estimator in both data quality and training objectives, which prevent QE models from benefiting more directly from large parallel corpora. We propose a novel framework called DirectQE that provides direct pretraining for QE tasks. In DirectQE, a generator is trained to produce pseudo data that is closer to real QE data, and a detector is pretrained on these data with novel objectives akin to the QE task. Experiments on widely used benchmarks show that DirectQE outperforms existing methods without using any pretrained models such as BERT. We also give extensive analyses showing how fixing the two gaps contributes to our improvements.
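The generator–detector scheme in the abstract can be pictured with a small sketch. The snippet below is a hypothetical illustration, not the paper's implementation: the learned generator (which proposes fluent substitutions) is stood in for by random word replacement so the script runs with no model weights, and all names (`VOCAB`, `generate_pseudo_qe_example`, `replace_prob`) are invented for the example. It shows the core idea of corrupting reference translations into pseudo MT outputs with token-level OK/BAD labels that a detector could then be pretrained on with a QE-like tagging objective.

```python
# Hypothetical sketch of DirectQE-style pseudo-data generation.
# Random substitution stands in for the paper's trained generator;
# every name here is illustrative, not from the paper's code.
import random

random.seed(0)

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "table"]

def generate_pseudo_qe_example(target_tokens, replace_prob=0.15):
    """Corrupt a reference translation into a pseudo MT output.

    Returns the corrupted sentence and token-level OK/BAD labels,
    mirroring the word-level supervision a QE detector is trained on.
    """
    corrupted, labels = [], []
    for tok in target_tokens:
        if random.random() < replace_prob:
            # In DirectQE a learned generator proposes the substitute;
            # here we just sample a different vocabulary item.
            corrupted.append(random.choice([w for w in VOCAB if w != tok]))
            labels.append("BAD")
        else:
            corrupted.append(tok)
            labels.append("OK")
    return corrupted, labels

# The detector would be pretrained on (source, corrupted target, labels)
# triples, then fine-tuned on the small amount of real QE data.
source = "le chat était assis sur le tapis".split()
reference = "the cat sat on the mat".split()
pseudo_mt, labels = generate_pseudo_qe_example(reference)
for tok, lab in zip(pseudo_mt, labels):
    print(f"{tok:>8} -> {lab}")
```

Because the pseudo labels come from the corruption process itself, this pretraining objective matches the downstream QE task far more closely than a generic word-prediction objective would, which is the gap the paper argues the predictor-estimator framework leaves open.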

Authors (7)
  1. Qu Cui (3 papers)
  2. Shujian Huang (106 papers)
  3. Jiahuan Li (10 papers)
  4. Xiang Geng (13 papers)
  5. Zaixiang Zheng (25 papers)
  6. Guoping Huang (17 papers)
  7. Jiajun Chen (125 papers)
Citations (23)