Unsupervised Hyperalignment for Multilingual Word Embeddings (1811.01124v3)

Published 2 Nov 2018 in cs.CL and cs.LG

Abstract: We consider the problem of aligning continuous word representations, learned in multiple languages, to a common space. It was recently shown that, in the case of two languages, it is possible to learn such a mapping without supervision. This paper extends this line of work to the problem of aligning multiple languages to a common space. A solution is to independently map all languages to a pivot language. Unfortunately, this degrades the quality of indirect word translation. We thus propose a novel formulation that ensures composable mappings, leading to better alignments. We evaluate our method by jointly aligning word vectors in eleven languages, showing consistent improvement with indirect mappings while maintaining competitive performance on direct word translation.
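The pivot strategy the abstract mentions is typically built on orthogonal Procrustes alignment: given paired word vectors from a source language and the pivot, find the rotation mapping one space onto the other. The sketch below is a minimal illustration of that building block (the supervised, two-language case), not the paper's unsupervised hyperalignment method; all names and the toy data are illustrative assumptions.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Procrustes: rotation W minimizing ||XW - Y||_F.

    X, Y: (n, d) arrays of paired word vectors (source, pivot).
    Solution: W = U V^T where U S V^T is the SVD of X^T Y.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy example: simulate a "foreign" embedding space as a random
# orthogonal rotation of the pivot space, then recover the alignment.
rng = np.random.default_rng(0)
pivot = rng.normal(size=(100, 8))
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal matrix
source = pivot @ Q.T
W = procrustes_align(source, pivot)
print(np.allclose(source @ W, pivot, atol=1e-6))
```

Aligning each language to the pivot this way is exactly the independent-mapping baseline the paper critiques: composing source-to-pivot with pivot-to-target maps accumulates error on indirect translation, which motivates the paper's jointly composable formulation.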

Authors (4)
  1. Jean Alaux (1 paper)
  2. Edouard Grave (56 papers)
  3. Marco Cuturi (93 papers)
  4. Armand Joulin (81 papers)
Citations (70)