
UnifiedCrawl: Aggregated Common Crawl for Affordable Adaptation of LLMs on Low-Resource Languages (2411.14343v1)

Published 21 Nov 2024 in cs.CL and cs.AI

Abstract: LLMs under-perform on low-resource languages due to limited training data. We present a method to efficiently collect text data for low-resource languages from the entire Common Crawl corpus. Our approach, UnifiedCrawl, filters and extracts Common Crawl using minimal compute resources, yielding monolingual datasets much larger than previously available sources. We demonstrate that leveraging this data to fine-tune multilingual LLMs via efficient adapter methods (QLoRA) significantly boosts performance on the low-resource language, while minimizing VRAM usage. Our experiments show large improvements in language modeling perplexity and an increase in few-shot prompting scores. Our work and released source code provide an affordable approach to improve LLMs for low-resource languages using consumer hardware. Our source code is available at https://github.com/bethelmelesse/unifiedcrawl.


The paper "UnifiedCrawl: Aggregated Common Crawl for Affordable Adaptation of LLMs on Low-Resource Languages" presents a significant contribution towards enhancing the performance of LLMs in low-resource languages by leveraging the entirety of the Common Crawl data. The authors propose an efficient, cost-effective framework called UnifiedCrawl to extract extensive monolingual corpora for low-resource languages using minimal compute resources.

In the domain of NLP, the scarcity of training data for low-resource languages poses a substantial challenge for the performance of LLMs. The paper tackles this issue by introducing a methodology to collect text data from Common Crawl, a large-scale web archive, focusing on languages that are underrepresented in existing datasets. UnifiedCrawl streamlines the extraction process by efficiently filtering the Common Crawl index to target specific languages, thereby constructing significantly larger monolingual datasets than what was previously available.
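
To make the index-filtering step concrete, here is a minimal sketch that selects records for a target language from locally downloaded shards of Common Crawl's columnar index. It assumes the published cc-index schema (content_languages, warc_filename, warc_record_offset, warc_record_length) and a local parquet path; it is not the paper's exact query, so verify the column names against the current Common Crawl documentation.

```python
# Sketch: filter locally downloaded Common Crawl columnar index shards
# (parquet) for records whose detected language matches a target language.
# Column names follow the published cc-index schema; adjust paths as needed.
import duckdb

TARGET_LANG = "amh"  # ISO 639-3 code for Amharic (example)

con = duckdb.connect()
rows = con.execute(
    """
    SELECT url, warc_filename, warc_record_offset, warc_record_length
    FROM read_parquet('cc-index/table/*.parquet')
    WHERE content_languages LIKE '%' || ? || '%'
    """,
    [TARGET_LANG],
).fetchall()

print(f"Found {len(rows)} candidate records for {TARGET_LANG}")
```

The offsets and lengths returned here are exactly what the selective-download step below needs, so no full WARC archives ever have to be fetched.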

The authors detail their data extraction pipeline that includes index filtering, selective downloading of WARC files via HTTP Range Requests, and subsequent text extraction using Trafilatura. They further refine the dataset quality through de-duplication techniques, which eliminate repetitive patterns to create a more robust training dataset. This approach to data extraction demonstrates a notable advancement in addressing the challenges of data scarcity for low-resource languages, enabling the utilization of Common Crawl data even on consumer-grade hardware.
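
As an illustration of the selective download and text-extraction steps, the sketch below fetches a single WARC record via an HTTP Range request from Common Crawl's public mirror and runs Trafilatura on the HTML payload. The helper name and the minimal header parsing are illustrative assumptions, not code taken from the released repository.

```python
# Sketch: fetch one WARC record with an HTTP Range request and extract the
# main text with Trafilatura. offset/length come from the index query above.
import gzip

import requests
import trafilatura


def fetch_record_text(warc_filename: str, offset: int, length: int):
    url = f"https://data.commoncrawl.org/{warc_filename}"
    headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
    resp = requests.get(url, headers=headers, timeout=60)
    resp.raise_for_status()

    # Each record is an independent gzip member: decompress it, then drop
    # the WARC headers and HTTP headers to keep only the HTML body.
    raw = gzip.decompress(resp.content)
    body = raw.split(b"\r\n\r\n", 2)[-1]

    html = body.decode("utf-8", errors="replace")
    return trafilatura.extract(html)  # main text, or None if extraction fails
```

Because only the byte range of each matching record is downloaded, the full multi-petabyte crawl never needs to be stored locally, which is what keeps the pipeline feasible on consumer hardware.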

Moreover, the paper explores the adaptation of multilingual LLMs through efficient training on the collected datasets using Quantized Low-Rank Adaptation (QLoRA). This technique mitigates the high resource demands typical of full fine-tuning by introducing lightweight adapters, allowing for the effective tuning of large models even with limited computing resources. The experiments conducted reveal significant improvements in language modeling perplexity and few-shot prompting scores for Amharic, highlighting the potential of this approach in making AI more accessible and effective for low-resource language communities.

The implications of this research are vast, offering a pathway for democratizing AI technology. By providing a method that reduces the barriers to adapting LLMs for low-resource languages, this work supports linguistic inclusivity and broadens the practical applicability of LLMs globally. In future developments, the methodology could be expanded to additional low-resource languages and further optimized to enhance dataset quality and extraction efficiency.

While this paper successfully illustrates a scalable and accessible solution for low-resource language adaptation, the ongoing challenge remains in thoroughly evaluating model performance across a more diverse set of downstream tasks to ensure robustness and real-world applicability. This research sets a solid foundation for future work aimed at enhancing LLM accessibility and utility in linguistically diverse contexts.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (3)
  1. Bethel Melesse Tessema (1 paper)
  2. Akhil Kedia (5 papers)
  3. Tae-Sun Chung (2 papers)
Github Logo Streamline Icon: https://streamlinehq.com