Learning Cross-Lingual Sentence Representations via a Multi-task Dual-Encoder Model (1810.12836v4)
Published 30 Oct 2018 in cs.CL
Abstract: A significant roadblock in multilingual neural language modeling is the lack of labeled non-English data. One potential method for overcoming this issue is learning cross-lingual text representations that can be used to transfer performance from training on English tasks to non-English tasks, despite little to no task-specific non-English data. In this paper, we explore a natural setup for learning cross-lingual sentence representations: the dual-encoder. We provide a comprehensive evaluation of our cross-lingual representations on a number of monolingual, cross-lingual, and zero-shot/few-shot learning tasks, and also give an analysis of different learned cross-lingual embedding spaces.
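The dual-encoder setup referenced in the abstract pairs two sentence encoders whose outputs live in a shared embedding space, trained so that true translation pairs score higher than the other sentences in the batch. The sketch below illustrates that in-batch translation-ranking signal; the PyTorch framing, the bag-of-embeddings towers, and names such as `DualEncoder`, `ranking_loss`, and the `scale` parameter are illustrative assumptions, not the paper's actual architecture or training recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """Minimal dual-encoder: two towers map source and target sentences
    into a shared space; training pulls translation pairs together."""

    def __init__(self, vocab_size: int, dim: int = 256):
        super().__init__()
        # Hypothetical towers: mean bag-of-embeddings stands in for the
        # deeper sentence encoders a real system would use.
        self.src_embed = nn.EmbeddingBag(vocab_size, dim)
        self.tgt_embed = nn.EmbeddingBag(vocab_size, dim)

    def encode_src(self, token_ids: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.src_embed(token_ids), dim=-1)

    def encode_tgt(self, token_ids: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.tgt_embed(token_ids), dim=-1)

def ranking_loss(src_vecs: torch.Tensor, tgt_vecs: torch.Tensor,
                 scale: float = 10.0) -> torch.Tensor:
    # In-batch softmax ranking loss: each source sentence should score
    # its own translation (the diagonal) above the other batch targets.
    logits = scale * src_vecs @ tgt_vecs.t()   # (B, B) similarity matrix
    labels = torch.arange(logits.size(0))      # diagonal = true pairs
    return F.cross_entropy(logits, labels)

# Toy usage with random token ids (batch of 4, sentences of length 6).
model = DualEncoder(vocab_size=1000)
src = torch.randint(0, 1000, (4, 6))
tgt = torch.randint(0, 1000, (4, 6))
loss = ranking_loss(model.encode_src(src), model.encode_tgt(tgt))
loss.backward()
```

Normalizing the embeddings and scaling the similarities keeps the softmax well-conditioned; in a multi-task variant, additional tasks would share these encoders while contributing their own losses.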
- Muthuraman Chidambaram
- Yinfei Yang
- Daniel Cer
- Steve Yuan
- Yun-Hsuan Sung
- Brian Strope
- Ray Kurzweil