Learning Unsupervised Word Mapping by Maximizing Mean Discrepancy (1811.00275v1)

Published 1 Nov 2018 in cs.CL

Abstract: Cross-lingual word embeddings aim to capture common linguistic regularities of different languages, which benefit various downstream tasks ranging from machine translation to transfer learning. Recently, it has been shown that these embeddings can be effectively learned by aligning two disjoint monolingual vector spaces through a linear transformation (word mapping). In this work, we focus on learning such a word mapping without any supervision signal. Most previous work of this task adopts parametric metrics to measure distribution differences, which typically requires a sophisticated alternate optimization process, either in the form of minmax game or intermediate density estimation. This alternate optimization process is relatively hard and unstable. In order to avoid such sophisticated alternate optimization, we propose to learn unsupervised word mapping by directly maximizing the mean discrepancy between the distribution of transferred embedding and target embedding. Extensive experimental results show that our proposed model outperforms competitive baselines by a large margin.
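For intuition, below is a minimal NumPy sketch of the quantity the abstract centers on: the (squared) maximum mean discrepancy (MMD) between mapped source embeddings and target embeddings under a Gaussian kernel. The function names, the bandwidth choice, and the toy data are illustrative assumptions rather than code from the paper; in the paper's setting the linear word mapping W would be optimized against this discrepancy.

```python
# Illustrative sketch (not the authors' code): squared MMD between mapped
# source embeddings X @ W.T and target embeddings Y under a Gaussian kernel.
import numpy as np

def gaussian_kernel(a: np.ndarray, b: np.ndarray, bandwidth: float) -> np.ndarray:
    # Pairwise Gaussian kernel values between rows of a and rows of b.
    sq_dists = (
        np.sum(a ** 2, axis=1, keepdims=True)
        - 2.0 * a @ b.T
        + np.sum(b ** 2, axis=1)
    )
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2(x: np.ndarray, y: np.ndarray, bandwidth: float = 1.0) -> float:
    # Biased estimator of the squared maximum mean discrepancy between
    # the empirical distributions of x and y.
    k_xx = gaussian_kernel(x, x, bandwidth).mean()
    k_yy = gaussian_kernel(y, y, bandwidth).mean()
    k_xy = gaussian_kernel(x, y, bandwidth).mean()
    return float(k_xx + k_yy - 2.0 * k_xy)

# Toy usage with random stand-ins for monolingual embedding matrices.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))   # source-language embeddings (n x d)
Y = rng.normal(size=(500, 50))   # target-language embeddings (m x d)
W = np.eye(50)                   # initial linear word mapping (d x d)
print("MMD^2 between mapped source and target:", mmd2(X @ W.T, Y))
```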

Authors (6)
  1. Pengcheng Yang (28 papers)
  2. Fuli Luo (23 papers)
  3. Shuangzhi Wu (29 papers)
  4. Jingjing Xu (80 papers)
  5. Dongdong Zhang (79 papers)
  6. Xu Sun (194 papers)
Citations (8)
