Towards a More Inclusive AI: Progress and Perspectives in Large Language Model Training for the Sámi Language (2405.05777v1)
Abstract: Sámi, an indigenous language group comprising multiple languages, faces digital marginalization due to the limited availability of data and of sophisticated LLMs designed for its linguistic intricacies. This work focuses on increasing technological participation for the Sámi language and draws the attention of the ML community to the language modeling problem of Ultra Low Resource (ULR) languages. ULR languages are those with very few available textual resources and very few speakers; they are also not supported by mainstream LLMs such as ChatGPT, which makes gathering artificial training data for them even more challenging. Mainstream AI foundation-model development has given little attention to this category of languages, yet building foundation models for ULR languages is important for promoting inclusion and extending the tangible capabilities and impact of LLMs to their speakers. To this end, we have compiled the available Sámi language resources from the web to create a clean dataset for training LLMs. To study how modern LLMs behave with a ULR language (Sámi), we experimented with several kinds of LLMs, mainly at the scale of roughly seven billion parameters, and also explored the effect of multilingual LLM training for ULRLs. We found that decoder-only models trained under a sequential multilingual scenario perform better than under joint multilingual training, and that multilingual training with high semantic overlap generally performs better than training from scratch. This is the first study of adapting non-statistical LLMs, which use the latest developments in NLP, to the Sámi language.
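To make the "sequential multilingual training" setup described in the abstract concrete, below is a minimal sketch (not the authors' code) of how a ~7B decoder-only model could first be continued on a related higher-resource language and then on a compiled Sámi corpus, using standard Hugging Face tooling. The base model name, corpus file paths, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Sketch of sequential (stage-wise) multilingual continued pre-training of a
# decoder-only LLM. Assumptions: a ~7B causal LM checkpoint, plain-text corpora
# "finnish_corpus.txt" and "sami_corpus.txt", and modest single-GPU settings.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # assumed ~7B decoder-only base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token          # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal-LM objective

# Stage 1: a related, higher-resource language (semantic/lexical overlap with Sámi).
# Stage 2: the compiled Sámi corpus. Training one stage after the other, rather than
# mixing the corpora, is what distinguishes sequential from joint multilingual training.
for stage, path in [("finnish", "finnish_corpus.txt"), ("sami", "sami_corpus.txt")]:
    dataset = load_dataset("text", data_files=path)["train"].map(
        tokenize, batched=True, remove_columns=["text"])
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=f"out/{stage}",
                               num_train_epochs=1,
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=16,
                               learning_rate=2e-5),
        train_dataset=dataset,
        data_collator=collator,
    )
    trainer.train()
    trainer.save_model(f"out/{stage}")   # the stage-2 checkpoint is the Sámi-adapted model
```

In practice, a parameter-efficient method such as LoRA could be swapped in for full fine-tuning at this scale; the sketch keeps the plain Trainer loop only to make the two-stage structure explicit.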