Predicting the Performance of Multilingual NLP Models (2110.08875v1)

Published 17 Oct 2021 in cs.CL and cs.LG

Abstract: Recent advancements in NLP have given us models like mBERT and XLMR that can serve over 100 languages. The languages that these models are evaluated on, however, are very few in number, and it is unlikely that evaluation datasets will cover all the languages that these models support. Potential solutions to the costly problem of dataset creation are to translate existing datasets into new languages or to create new datasets with template-filling techniques. This paper proposes an alternative solution for evaluating a model across languages that makes use of the model's existing performance scores on languages for which a particular task has test sets. We train a predictor on these performance scores and use it to predict the model's performance in different evaluation settings. Our results show that our method is effective in filling the gaps in the evaluation for an existing set of languages, but might require additional improvements if we want it to generalize to unseen languages.
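
To make the approach concrete, here is a minimal sketch of the predictor idea: fit a regressor on (language features → observed score) pairs for languages that already have test sets, then query it for languages that lack them. The two-feature representation, the feature values, and the choice of gradient-boosted trees below are illustrative assumptions, not the paper's exact predictor or feature set.

```python
# Sketch: predict a multilingual model's per-language task scores from
# per-language features, using only languages with existing test sets.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import LeaveOneOut

# Hypothetical per-language features (e.g., share of pretraining data,
# typological similarity to English) and observed test-set scores.
languages = ["en", "de", "hi", "sw", "fi"]
X = np.array([
    [1.00, 0.95],   # en
    [0.40, 0.80],   # de
    [0.10, 0.35],   # hi
    [0.02, 0.20],   # sw
    [0.08, 0.45],   # fi
])
y = np.array([0.91, 0.85, 0.70, 0.55, 0.74])  # observed scores (illustrative)

# Leave-one-language-out: hold out each language in turn and predict
# its score from the remaining ones.
errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    model = GradientBoostingRegressor(n_estimators=100, max_depth=2)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])[0]
    actual = y[test_idx][0]
    errors.append(abs(pred - actual))
    print(f"{languages[test_idx[0]]}: predicted {pred:.2f}, actual {actual:.2f}")

print(f"mean absolute error: {np.mean(errors):.3f}")
```

The leave-one-language-out loop simulates the harder setting the abstract distinguishes: predicting performance for a language the predictor has never observed, as opposed to filling evaluation gaps within an existing set of languages.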

Authors (6)
  1. Anirudh Srinivasan (9 papers)
  2. Sunayana Sitaram (54 papers)
  3. Tanuja Ganu (22 papers)
  4. Sandipan Dandapat (17 papers)
  5. Kalika Bali (27 papers)
  6. Monojit Choudhury (66 papers)
Citations (25)