
A Study of Cross-Lingual Ability and Language-specific Information in Multilingual BERT (2004.09205v1)

Published 20 Apr 2020 in cs.CL

Abstract: Recently, multilingual BERT has worked remarkably well on cross-lingual transfer tasks, outperforming static non-contextualized word embeddings. In this work, we provide an in-depth experimental study to supplement the existing literature on cross-lingual ability. We compare the cross-lingual ability of non-contextualized and contextualized representation models trained on the same data. We find that data size and context window size are crucial factors for transferability. We also observe language-specific information in multilingual BERT. By manipulating the latent representations, we can control the output languages of multilingual BERT and achieve unsupervised token translation. We further show that, based on this observation, there is a computationally cheap but effective approach to improving the cross-lingual ability of multilingual BERT.
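The "manipulating the latent representations" idea can be illustrated with a minimal toy sketch: if each language contributes a roughly constant offset to an encoder's hidden states, then subtracting the source language's mean vector and adding the target language's mean vector shifts a token's representation across languages, and a nearest-neighbour lookup yields an unsupervised token translation. The synthetic vectors and the `translate` helper below are illustrative assumptions, not the paper's actual multilingual BERT states.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for contextual embeddings (hypothetical data; in the paper
# these would be multilingual BERT hidden states for two languages).
dim = 8
en_shift = rng.normal(size=dim)        # assumed language-specific offset
es_shift = rng.normal(size=dim)
content = rng.normal(size=(5, dim))    # shared "semantic" component

en_vecs = content + en_shift           # "English" token representations
es_vecs = content + es_shift           # "Spanish" token representations

# Estimate the language-specific information as the per-language mean.
mu_en = en_vecs.mean(axis=0)
mu_es = es_vecs.mean(axis=0)

def translate(vec, mu_src, mu_tgt):
    """Shift a representation from the source to the target language
    by swapping the estimated language-mean components."""
    return vec - mu_src + mu_tgt

# Unsupervised token "translation": shift one English vector toward
# Spanish, then retrieve its nearest Spanish neighbour.
shifted = translate(en_vecs[2], mu_en, mu_es)
dists = np.linalg.norm(es_vecs - shifted, axis=1)
print(int(dists.argmin()))  # → 2 (recovers the matching token)
```

In this idealized setup the shift recovers the paired token exactly; with real contextual embeddings the mean-difference is only an approximation, which is why the abstract frames the corresponding method as cheap but effective rather than exact.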

Authors (4)
  1. Chi-Liang Liu (9 papers)
  2. Tsung-Yuan Hsu (6 papers)
  3. Yung-Sung Chuang (37 papers)
  4. Hung-yi Lee (325 papers)
Citations (13)