
Specializing Multilingual Language Models: An Empirical Study (2106.09063v4)

Published 16 Jun 2021 in cs.CL

Abstract: Pretrained multilingual language models have become a common tool in transferring NLP capabilities to low-resource languages, often with adaptations. In this work, we study the performance, extensibility, and interaction of two such adaptations: vocabulary augmentation and script transliteration. Our evaluations on part-of-speech tagging, universal dependency parsing, and named entity recognition in nine diverse low-resource languages uphold the viability of these approaches while raising new questions around how to optimally adapt multilingual models to low-resource settings.
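To make the first adaptation concrete, here is a minimal sketch of vocabulary augmentation using the Hugging Face transformers library, assuming an mBERT-style multilingual encoder and a token-classification head for POS tagging. The model name, label count, and added subwords below are illustrative assumptions, not the paper's actual experimental setup.

```python
# Minimal sketch: augmenting a multilingual model's vocabulary with
# target-language subwords before fine-tuning on a downstream task.
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "bert-base-multilingual-cased"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=17  # e.g., the 17 Universal POS tags
)

# Hypothetical subwords mined from target-language text; in practice
# these would come from training a subword vocabulary on a
# low-resource-language corpus.
new_subwords = ["##aka", "ngoba", "##ile"]
num_added = tokenizer.add_tokens(new_subwords)

# Grow the embedding matrix so the new token ids get (randomly
# initialized) vectors; these are learned during continued
# pretraining or task fine-tuning on target-language data.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; vocab size is now {len(tokenizer)}")
```

The second adaptation, script transliteration, would instead preprocess the target-language text into a script the pretrained vocabulary already covers (e.g., romanization), so the two techniques can be applied independently or together.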

Authors (2)
  1. Ethan C. Chau (5 papers)
  2. Noah A. Smith (224 papers)
Citations (26)