TiBERT: Tibetan Pre-trained Language Model (2205.07303v1)

Published 15 May 2022 in cs.CL and cs.AI

Abstract: Pre-trained language models are trained on large-scale unlabeled text and achieve state-of-the-art results on many downstream tasks. However, current pre-trained models are mainly concentrated in the Chinese and English fields. For low-resource languages such as Tibetan, there is a lack of a monolingual pre-trained model. To promote the development of Tibetan natural language processing, this paper collects large-scale training data from Tibetan websites and constructs a vocabulary that covers 99.95% of the words in the corpus using SentencePiece. We then train the Tibetan monolingual pre-trained language model, named TiBERT, on this data and vocabulary. Finally, we apply TiBERT to the downstream tasks of text classification and question generation and compare it with classic models and multilingual pre-trained models. The experimental results show that TiBERT achieves the best performance. Our model is published at http://tibert.cmli-nlp.com/
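The vocabulary-construction step described in the abstract can be reproduced in outline with the SentencePiece library. The sketch below is not the authors' code; the corpus file name, output prefix, vocabulary size, and model type are illustrative assumptions, and only the goal of high word coverage comes from the paper.

```python
# Minimal sketch: building a subword vocabulary over a Tibetan corpus with
# SentencePiece, as the abstract describes. All file names and hyperparameters
# here are assumptions, not values reported in the paper.
import sentencepiece as spm

# Train a subword model on a raw Tibetan corpus (one sentence per line).
spm.SentencePieceTrainer.train(
    input="tibetan_corpus.txt",   # hypothetical corpus file
    model_prefix="tibert_sp",     # hypothetical output prefix
    vocab_size=30000,             # assumed size; the paper reports ~99.95% word coverage
    character_coverage=0.9995,    # assumed coverage setting
    model_type="unigram",         # SentencePiece default; the paper does not specify
)

# Load the trained model and tokenize a sample Tibetan string.
sp = spm.SentencePieceProcessor(model_file="tibert_sp.model")
print(sp.encode("བོད་ཡིག", out_type=str))
```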

Authors (4)
  1. Yuan Sun (117 papers)
  2. Sisi Liu (3 papers)
  3. Junjie Deng (2 papers)
  4. Xiaobing Zhao (7 papers)
Citations (9)