BJTU-WeChat's Systems for the WMT22 Chat Translation Task (2211.15009v1)

Published 28 Nov 2022 in cs.CL

Abstract: This paper introduces the joint submission of Beijing Jiaotong University and WeChat AI to the WMT'22 chat translation task for English-German. Building on the Transformer, we apply several effective variants. In our experiments, we adopt the pre-training-then-fine-tuning paradigm. In the first, pre-training stage, we employ data filtering and synthetic data generation (i.e., back-translation, forward-translation, and knowledge distillation). In the second, fine-tuning stage, we investigate speaker-aware in-domain data generation, speaker adaptation, prompt-based context modeling, target denoising fine-tuning, and a boosted self-COMET-based model ensemble. Our systems achieve COMET scores of 0.810 (English-German) and 0.946 (German-English), the highest among all submissions in both directions.
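
Two of the fine-tuning techniques named in the abstract are concrete enough to sketch. First, target denoising fine-tuning: in chat translation the model conditions on preceding target-side utterances, and at inference that context consists of the model's own, possibly imperfect, translations; corrupting the gold target context during fine-tuning narrows this train/test mismatch. Below is a minimal sketch under assumed details: the token-masking noise scheme, mask token, and noise ratio are illustrative choices, not the paper's reported configuration.

```python
import random

def denoise_target_context(tokens, mask_token="<mask>", noise_ratio=0.15, seed=None):
    """Randomly replace a fraction of target-context tokens with a mask token.

    Sketch of target denoising: the gold target-side context is corrupted
    during fine-tuning so the model learns to tolerate the noisy context
    (e.g., its own earlier translations) it will see at test time.
    """
    rng = random.Random(seed)
    return [mask_token if rng.random() < noise_ratio else tok for tok in tokens]

# Example: corrupt the previous target-side utterance before prepending it
# as context to the current training example.
context = "ich habe das Paket gestern erhalten".split()
print(denoise_target_context(context, seed=0))
```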
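Second, the boosted self-COMET-based model ensemble can be read as ensemble selection driven by COMET on a development set. The sketch below is one plausible greedy variant, not the paper's exact procedure; `translate_fn` and `comet_score` are hypothetical callables standing in for the actual ensemble decoding and COMET scoring pipeline.

```python
def greedy_comet_ensemble(candidates, translate_fn, comet_score,
                          dev_src, dev_ref, max_size=4):
    """Greedily grow an ensemble by adding whichever candidate model most
    improves dev-set COMET, stopping when no candidate helps.

    translate_fn(models, src) -> hypotheses decoded with the ensembled models
    comet_score(src, hyp, ref) -> corpus-level COMET score
    Both interfaces are assumptions for illustration.
    """
    selected, best = [], float("-inf")
    pool = list(candidates)
    while pool and len(selected) < max_size:
        # Score every remaining candidate when added to the current ensemble.
        scored = [(comet_score(dev_src, translate_fn(selected + [m], dev_src), dev_ref), m)
                  for m in pool]
        score, model = max(scored, key=lambda x: x[0])
        if score <= best:
            break  # no candidate improves dev COMET; stop early
        best = score
        selected.append(model)
        pool.remove(model)
    return selected, best
```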

Authors (5)
  1. Yunlong Liang (33 papers)
  2. Fandong Meng (174 papers)
  3. Jinan Xu (64 papers)
  4. Yufeng Chen (58 papers)
  5. Jie Zhou (687 papers)
Citations (2)