TransQuest at WMT2020: Sentence-Level Direct Assessment (2010.05318v1)
Published 11 Oct 2020 in cs.CL
Abstract: This paper presents the team TransQuest's participation in the Sentence-Level Direct Assessment shared task of WMT 2020. We introduce a simple QE framework based on cross-lingual transformers and use it to implement and evaluate two different neural architectures. The proposed methods achieve state-of-the-art results, surpassing those obtained by OpenKiwi, the baseline used in the shared task. We further improve the results of the QE framework by performing ensembling and data augmentation. According to the official WMT 2020 results, our approach is the winning solution in all of the language pairs.
- Tharindu Ranasinghe (52 papers)
- Constantin Orasan (33 papers)
- Ruslan Mitkov (15 papers)
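
The sketch below is a rough illustration of the kind of sentence-level QE the abstract describes: a source sentence and its machine translation are encoded jointly by a cross-lingual transformer, and a single-output regression head predicts a direct assessment (DA) score. The choice of Hugging Face `transformers`, the `xlm-roberta-base` checkpoint, and the example sentences are assumptions for illustration only; this is not the TransQuest release or either of the architectures evaluated in the paper.

```python
# Minimal sketch of sentence-level QE with a cross-lingual transformer.
# Assumptions: Hugging Face `transformers`, the `xlm-roberta-base` checkpoint,
# and a freshly initialised regression head -- not the TransQuest implementation.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "xlm-roberta-base"  # assumed cross-lingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
# A single regression output stands in for a direct assessment (DA) score.
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=1, problem_type="regression"
)

source = "Das ist ein Beispieltext."      # hypothetical source sentence
translation = "This is an example text."  # hypothetical MT output

# Encode source and translation together as one sequence pair.
inputs = tokenizer(source, translation, return_tensors="pt", truncation=True)
with torch.no_grad():
    predicted_da = model(**inputs).logits.squeeze().item()
print(f"Predicted DA score: {predicted_da:.3f}")
```

In practice such a model would be fine-tuned on DA-annotated sentence pairs before its scores are meaningful; the untrained head here only shows the input/output shape of the task.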