
X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models (2010.06189v3)

Published 13 Oct 2020 in cs.CL

Abstract: Language models (LMs) have proven surprisingly successful at capturing factual knowledge by completing cloze-style fill-in-the-blank questions such as "Punta Cana is located in _." However, while knowledge is both written and queried in many languages, studies on LMs' factual representation ability have almost invariably been performed on English. To assess factual knowledge retrieval in LMs in different languages, we create a multilingual benchmark of cloze-style probes for 23 typologically diverse languages. To properly handle language variations, we expand probing methods from single- to multi-word entities, and develop several decoding algorithms to generate multi-token predictions. Extensive experimental results provide insights about how well (or poorly) current state-of-the-art LMs perform at this task in languages with more or fewer available resources. We further propose a code-switching-based method to improve the ability of multilingual LMs to access knowledge, and verify its effectiveness on several benchmark languages. Benchmark data and code have been released at https://x-factr.github.io.
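The multi-token decoding the abstract mentions can be illustrated with a confidence-based iterative fill: start with several mask slots and repeatedly commit the single most confident prediction until none remain. The sketch below is illustrative only; `toy_score` is a hypothetical stand-in for a masked LM's per-position top-1 prediction (a real implementation would query e.g. a multilingual BERT), and the paper's actual decoding algorithms may differ in detail.

```python
MASK = "[MASK]"

def fill_masks(tokens, score):
    """Iteratively replace the most confident [MASK] slot until none remain.

    `score(tokens, i)` returns (top-1 token, its probability) for the mask
    at position i given the current partially filled sequence.
    """
    tokens = list(tokens)
    while MASK in tokens:
        best = None  # (confidence, position, candidate token)
        for i, tok in enumerate(tokens):
            if tok != MASK:
                continue
            cand, conf = score(tokens, i)
            if best is None or conf > best[0]:
                best = (conf, i, cand)
        _, pos, cand = best
        tokens[pos] = cand  # commit the most confident slot this round
    return tokens

# Toy scorer with fixed, position-keyed answers -- purely illustrative,
# not a real model.
def toy_score(tokens, i):
    answers = {5: ("Dominican", 0.9), 6: ("Republic", 0.8)}
    return answers[i]

sent = "Punta Cana is located in".split() + [MASK, MASK]
print(" ".join(fill_masks(sent, toy_score)))
# → Punta Cana is located in Dominican Republic
```

Re-scoring the remaining masks after each commitment lets later predictions condition on earlier ones, which is the key difference from filling all masks independently in one pass.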

Authors (5)
  1. Zhengbao Jiang (25 papers)
  2. Antonios Anastasopoulos (111 papers)
  3. Jun Araki (11 papers)
  4. Haibo Ding (11 papers)
  5. Graham Neubig (342 papers)
Citations (130)