Unsupervised Multilingual Sentence Embeddings for Parallel Corpus Mining (2105.10419v1)

Published 21 May 2021 in cs.CL

Abstract: Existing models of multilingual sentence embeddings require large parallel data resources which are not available for low-resource languages. We propose a novel unsupervised method to derive multilingual sentence embeddings relying only on monolingual data. We first produce a synthetic parallel corpus using unsupervised machine translation, and use it to fine-tune a pretrained cross-lingual masked language model (XLM) to derive the multilingual sentence representations. The quality of the representations is evaluated on two parallel corpus mining tasks with improvements of up to 22 F1 points over vanilla XLM. In addition, we observe that a single synthetic bilingual corpus is able to improve results for other language pairs.
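The mining step described in the abstract can be illustrated with a minimal sketch: given sentence embeddings for a source and a target corpus (in the paper these come from the fine-tuned XLM; here they are toy NumPy vectors), candidate pairs are retrieved by nearest-neighbor cosine similarity. The function name, the greedy retrieval strategy, and the threshold are illustrative assumptions, not the paper's exact scoring method.

```python
import numpy as np

def mine_parallel_pairs(src_emb, tgt_emb, threshold=0.5):
    """Greedy parallel-sentence mining by cosine similarity.

    src_emb, tgt_emb: arrays of shape (n_src, d) and (n_tgt, d)
    holding sentence embeddings (e.g. mean-pooled encoder states).
    Returns (src_index, tgt_index, score) for each source sentence
    whose best target match scores at least `threshold`.
    """
    # L2-normalize so the dot product equals cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src @ tgt.T                 # (n_src, n_tgt) similarity matrix
    best = sims.argmax(axis=1)         # best target for each source
    return [(i, int(j), float(sims[i, j]))
            for i, j in enumerate(best) if sims[i, j] >= threshold]

# Toy example: targets are shuffled, slightly noisy copies of the sources,
# standing in for translations that embed close to their source sentences.
rng = np.random.default_rng(0)
src = rng.normal(size=(3, 4))
tgt = src[[2, 0, 1]] + 0.01 * rng.normal(size=(3, 4))
pairs = mine_parallel_pairs(src, tgt)
```

In this toy setup each source sentence recovers its shuffled counterpart (0→1, 1→2, 2→0), mirroring how a well-aligned embedding space lets simple nearest-neighbor search recover translation pairs.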

Authors (5)
  1. Ivana Kvapilíková (4 papers)
  2. Mikel Artetxe (52 papers)
  3. Gorka Labaka (15 papers)
  4. Eneko Agirre (53 papers)
  5. Ondřej Bojar (91 papers)
Citations (36)