
Language Chameleon: Transformation analysis between languages using Cross-lingual Post-training based on Pre-trained language models (2209.06422v1)

Published 14 Sep 2022 in cs.CL

Abstract: As pre-trained language models become more resource-demanding, the inequality between resource-rich languages such as English and resource-scarce languages is worsening. This can be attributed to the fact that the amount of available training data in each language follows a power-law distribution, and most languages belong to the long tail of that distribution. Several research directions attempt to mitigate this problem; for example, cross-lingual transfer learning and multilingual training aim to benefit long-tail languages via the knowledge acquired from resource-rich languages. Although successful, existing work has mainly focused on experimenting with as many languages as possible. As a result, targeted in-depth analysis is mostly absent. In this study, we focus on a single low-resource language and perform extensive evaluation and probing experiments using cross-lingual post-training (XPT). To make the transfer scenario challenging, we choose Korean as the target language, as it is a language isolate and thus shares almost no typology with English. Results show that XPT not only outperforms or performs on par with monolingual models trained with orders of magnitude more data but is also highly efficient in the transfer process.
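The abstract describes XPT only at a high level. The sketch below illustrates the general idea of cross-lingual post-training as it is commonly implemented, assuming a Hugging Face Transformers setup: reuse an English pre-trained Transformer body, swap in a target-language tokenizer, re-initialize the embedding layer, and post-train in two phases (new embeddings only, then the full model). The checkpoint name, tokenizer path, and two-phase schedule here are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of cross-lingual post-training (XPT-style adaptation), assuming
# a Hugging Face setup. Model name, tokenizer path, and schedule are hypothetical.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# 1) Start from an English pre-trained model (assumed checkpoint).
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# 2) Load a tokenizer built for the target language (hypothetical local path).
ko_tokenizer = AutoTokenizer.from_pretrained("./korean-wordpiece-tokenizer")

# 3) Resize and re-initialize the embedding matrix for the Korean vocabulary.
model.resize_token_embeddings(len(ko_tokenizer))
model.get_input_embeddings().weight.data.normal_(mean=0.0, std=0.02)

# Phase 1: freeze the Transformer body and train only the embedding parameters,
# so the new Korean embeddings align with the English-trained body.
for name, param in model.named_parameters():
    param.requires_grad = "embeddings" in name

# Phase 2 (after some steps): unfreeze everything and continue post-training
# on Korean text with the masked-language-modeling objective.
def unfreeze_all(m):
    for p in m.parameters():
        p.requires_grad = True

# Toy MLM step on a single Korean sentence; real training would mask tokens,
# e.g. via DataCollatorForLanguageModeling, instead of using labels = inputs.
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=5e-5
)
batch = ko_tokenizer("한국어 문장 예시입니다.", return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
```

The two-phase schedule reflects the efficiency claim in the abstract: most parameters stay frozen during the alignment phase, so only a small fraction of the model is updated before full fine-tuning begins.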

Authors (8)
  1. Suhyune Son (4 papers)
  2. Chanjun Park (49 papers)
  3. Jungseob Lee (8 papers)
  4. Midan Shim (3 papers)
  5. Chanhee Lee (14 papers)
  6. Yoonna Jang (9 papers)
  7. Jaehyung Seo (15 papers)
  8. Heuiseok Lim (49 papers)