
Do Multi-Lingual Pre-trained Language Models Reveal Consistent Token Attributions in Different Languages? (2112.12356v1)

Published 23 Dec 2021 in cs.CL

Abstract: During the past several years, a surge of multi-lingual Pre-trained Language Models (PLMs) has been proposed to achieve state-of-the-art performance in many cross-lingual downstream tasks. However, why multi-lingual PLMs perform well remains an open question. For example, it is unclear whether multi-lingual PLMs reveal consistent token attributions in different languages. To address this, in this paper we propose a Cross-lingual Consistency of Token Attributions (CCTA) evaluation framework. Extensive experiments on three downstream tasks demonstrate that multi-lingual PLMs assign significantly different attributions to multi-lingual synonyms. Moreover, we make the following observations: 1) Spanish yields the most consistent token attributions across languages when used for training PLMs; 2) the consistency of token attributions strongly correlates with performance on downstream tasks.
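
The abstract describes CCTA only at a high level: compute per-token attributions for a sentence and its translation, then measure how consistently aligned synonym tokens are attributed. Below is a minimal sketch of that idea, assuming attributions are already computed (e.g., via integrated gradients), tokens are word-aligned across the two languages, and consistency is scored with Spearman correlation; the paper's exact metric and alignment procedure may differ.

```python
# Hypothetical sketch of a CCTA-style consistency check (not the paper's code).
import numpy as np
from scipy.stats import spearmanr

def attribution_consistency(attr_src, attr_tgt, alignment):
    """Correlate attributions of aligned (synonym) tokens in two languages.

    attr_src, attr_tgt : 1-D arrays of per-token attribution scores.
    alignment          : list of (i, j) pairs mapping source token i to target token j.
    """
    src = np.array([attr_src[i] for i, _ in alignment])
    tgt = np.array([attr_tgt[j] for _, j in alignment])
    rho, _ = spearmanr(src, tgt)  # rank correlation over aligned tokens
    return rho

# Toy example: English vs. Spanish attributions for a 4-token aligned sentence.
attr_en = np.array([0.61, 0.05, 0.22, 0.12])
attr_es = np.array([0.55, 0.08, 0.25, 0.12])
print(attribution_consistency(attr_en, attr_es, [(0, 0), (1, 1), (2, 2), (3, 3)]))
```

A score near 1 would indicate that the model attributes importance to corresponding tokens in both languages in the same order; the paper's finding is that such consistency correlates with downstream performance.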

Authors (8)
  1. Junxiang Wang (35 papers)
  2. Xuchao Zhang (44 papers)
  3. Bo Zong (13 papers)
  4. Yanchi Liu (41 papers)
  5. Wei Cheng (175 papers)
  6. Jingchao Ni (27 papers)
  7. Haifeng Chen (99 papers)
  8. Liang Zhao (353 papers)