Alibaba-Translate China's Submission for WMT 2022 Metrics Shared Task (2210.09683v2)

Published 18 Oct 2022 in cs.CL

Abstract: In this report, we present our submission to the WMT 2022 Metrics Shared Task. We build our system on the core idea of UNITE (Unified Translation Evaluation), which unifies source-only, reference-only, and source-reference-combined evaluation scenarios into a single model. During the pre-training phase, we first use pseudo-labeled data examples to continue pre-training UNITE; notably, to reduce the gap between pre-training and fine-tuning, we apply data cropping and a ranking-based score normalization strategy. During the fine-tuning phase, we use both Direct Assessment (DA) and Multidimensional Quality Metrics (MQM) data from past years' WMT competitions. Finally, we collect results from models with different pre-trained language model backbones and use different ensembling strategies for the translation directions involved.
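The abstract compresses several mechanisms into one paragraph. A minimal Python sketch may help make two of them concrete: the unified input construction behind UNITE's three evaluation scenarios, and a ranking-based normalization of pseudo-label scores. Everything here is illustrative, assuming details the abstract does not specify: the function names, the separator token, the concatenation order, and the exact normalization transform are assumptions, not details taken from the paper.

```python
import numpy as np

def unite_input(hyp, src=None, ref=None, sep="</s>"):
    """Build one of UNITE's three evaluation inputs.

    Source-only:               unite_input(hyp, src=src)
    Reference-only:            unite_input(hyp, ref=ref)
    Source-reference-combined: unite_input(hyp, src=src, ref=ref)

    The concatenation order and separator token are assumptions for
    illustration; the paper's actual input layout may differ.
    """
    parts = [hyp] + [s for s in (src, ref) if s is not None]
    return f" {sep} ".join(parts)

def rank_normalize(scores):
    """Ranking-based score normalization (sketch).

    Replaces raw pseudo-label scores with their empirical ranks,
    rescaled to [0, 1], so heterogeneous scoring sources share a
    common target scale for continued pre-training. The exact
    transform in the submission may differ.
    """
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(len(scores))
    return ranks / max(len(scores) - 1, 1)

# Example: one combined input and three pseudo-labeled scores.
if __name__ == "__main__":
    text = unite_input("The cat sat.", src="Le chat s'assit.",
                       ref="The cat sat down.")
    print(text)                               # hyp </s> src </s> ref
    print(rank_normalize([0.2, 0.9, 0.5]))    # [0.  1.  0.5]
```

Rank normalization maps scores to a scale that depends only on their ordering, which is one plausible way to make pseudo-labels from different scoring sources comparable before they are used as regression targets.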

Authors (8)
  1. Yu Wan
  2. Keqin Bao
  3. Dayiheng Liu
  4. Baosong Yang
  5. Derek F. Wong
  6. Lidia S. Chao
  7. Wenqiang Lei
  8. Jun Xie