
Unsupervised Cross-lingual Word Embedding by Multilingual Neural Language Models (1809.02306v1)

Published 7 Sep 2018 in cs.CL, cs.AI, and cs.LG

Abstract: We propose an unsupervised method to obtain cross-lingual embeddings without any parallel data or pre-trained word embeddings. The proposed model, which we call a multilingual neural language model, takes sentences of multiple languages as input. It contains bidirectional LSTMs that serve as forward and backward language models, and these networks are shared among all the languages. The other parameters, i.e. the word embeddings and the linear transformations between hidden states and outputs, are specific to each language. The shared LSTMs capture the sentence structure common to all languages. Accordingly, the word embeddings of each language are mapped into a common latent space, making it possible to measure the similarity of words across multiple languages. We evaluate the quality of the cross-lingual word embeddings on a word alignment task. Our experiments demonstrate that our model obtains cross-lingual embeddings of much higher quality than existing unsupervised models when only a small amount of monolingual data (i.e. 50k sentences) is available, or when the domains of the monolingual data differ across languages.
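
The abstract describes the architecture concretely enough to sketch: shared forward and backward LSTM language models, with word embeddings and output projections that are specific to each language. The following is a minimal sketch in PyTorch; the framework choice, the class name MultilingualNeuralLM, and the dimensions are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a multilingual neural language model with shared LSTMs
# and language-specific embeddings/output layers (assumed PyTorch; all names
# and hyperparameters here are illustrative, not from the paper's code).
import torch
import torch.nn as nn


class MultilingualNeuralLM(nn.Module):
    def __init__(self, vocab_sizes, emb_dim=300, hidden_dim=300):
        super().__init__()
        # Language-specific parameters: word embeddings and output projections.
        self.embeddings = nn.ModuleList(
            [nn.Embedding(v, emb_dim) for v in vocab_sizes])
        self.fwd_out = nn.ModuleList(
            [nn.Linear(hidden_dim, v) for v in vocab_sizes])
        self.bwd_out = nn.ModuleList(
            [nn.Linear(hidden_dim, v) for v in vocab_sizes])
        # Shared parameters: forward and backward LSTM language models.
        self.fwd_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.bwd_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, tokens, lang):
        # tokens: (batch, seq_len) word ids of sentences in language `lang`.
        emb = self.embeddings[lang](tokens)
        fwd_h, _ = self.fwd_lstm(emb)                        # left-to-right
        bwd_h, _ = self.bwd_lstm(torch.flip(emb, dims=[1]))  # right-to-left
        bwd_h = torch.flip(bwd_h, dims=[1])
        # Language-specific softmax layers predict next / previous words.
        return self.fwd_out[lang](fwd_h), self.bwd_out[lang](bwd_h)
```

Training would sum the standard cross-entropy losses of the forward (next-word) and backward (previous-word) language models over each language's monolingual corpus. Because the LSTMs are shared, the per-language embeddings are driven into a common latent space, where cross-lingual word similarity can then be measured, for example by cosine similarity between embedding vectors.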

Authors (2)
  1. Takashi Wada (10 papers)
  2. Tomoharu Iwata (64 papers)
Citations (26)