
Crosslingual Capabilities and Knowledge Barriers in Multilingual Large Language Models (2406.16135v1)

Published 23 Jun 2024 in cs.CL and cs.LG

Abstract: LLMs are typically multilingual due to pretraining on diverse multilingual corpora. But can these models relate corresponding concepts across languages, i.e., be truly crosslingual? This study evaluates six state-of-the-art LLMs on inherently crosslingual tasks. While these models show promising surface-level crosslingual abilities on machine translation and in embedding-space analyses, they struggle with deeper crosslingual knowledge transfer, revealing a crosslingual knowledge barrier in both general (MMLU benchmark) and domain-specific (Harry Potter quiz) contexts. Simple inference-time mitigation methods offer only limited improvement. In contrast, fine-tuning LLMs on mixed-language data effectively reduces these gaps, even when using out-of-domain datasets like WikiText. Our findings suggest the need for explicit optimization to unlock the full crosslingual potential of LLMs. Our code is publicly available at https://github.com/google-research/crosslingual-knowledge-barriers.
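
The abstract does not spell out how the mixed-language fine-tuning data is constructed. As one plausible reading, a sentence-level code-switching scheme could look like the sketch below; the `translate` stub, the `switch_prob` parameter, and the interleaving strategy are illustrative assumptions, not the paper's documented method.

```python
# Sketch: build mixed-language fine-tuning text by randomly translating
# a fraction of sentences into a target language (sentence-level
# code-switching). The exact mixing scheme is an assumption here.
import random


def translate(sentence: str, target_lang: str) -> str:
    """Placeholder for any MT system (API call or local model)."""
    raise NotImplementedError("plug in a translation backend")


def mix_languages(sentences: list[str], target_lang: str,
                  switch_prob: float = 0.5, seed: int = 0) -> str:
    """Return a code-switched document: each sentence is translated
    into target_lang with probability switch_prob, else kept as-is."""
    rng = random.Random(seed)
    mixed = [
        translate(s, target_lang) if rng.random() < switch_prob else s
        for s in sentences
    ]
    return " ".join(mixed)
```

Applying such a transformation to an out-of-domain corpus such as WikiText and then fine-tuning on the resulting documents would match the high-level recipe the abstract describes, though the paper itself should be consulted for the actual data-mixing details.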

Authors (9)
  1. Lynn Chua (16 papers)
  2. Badih Ghazi (78 papers)
  3. Yangsibo Huang (40 papers)
  4. Pritish Kamath (48 papers)
  5. Ravi Kumar (146 papers)
  6. Pasin Manurangsi (127 papers)
  7. Amer Sinha (11 papers)
  8. Chulin Xie (27 papers)
  9. Chiyuan Zhang (57 papers)