SeaLLMs -- Large Language Models for Southeast Asia (2312.00738v2)

Published 1 Dec 2023 in cs.CL

Abstract: Despite the remarkable achievements of LLMs in various tasks, there remains a linguistic bias that favors high-resource languages, such as English, often at the expense of low-resource and regional languages. To address this imbalance, we introduce SeaLLMs, an innovative series of LLMs that specifically focuses on Southeast Asian (SEA) languages. SeaLLMs are built upon the Llama-2 model and further advanced through continued pre-training with an extended vocabulary, specialized instruction and alignment tuning to better capture the intricacies of regional languages. This allows them to respect and reflect local cultural norms, customs, stylistic preferences, and legal considerations. Our comprehensive evaluation demonstrates that SeaLLM-13b models exhibit superior performance across a wide spectrum of linguistic tasks and assistant-style instruction-following capabilities relative to comparable open-source models. Moreover, they outperform ChatGPT-3.5 in non-Latin languages, such as Thai, Khmer, Lao, and Burmese, by large margins while remaining lightweight and cost-effective to operate.

Authors (17)
  1. Xuan-Phi Nguyen (22 papers)
  2. Wenxuan Zhang (75 papers)
  3. Xin Li (980 papers)
  4. Mahani Aljunied (6 papers)
  5. Qingyu Tan (9 papers)
  6. Liying Cheng (16 papers)
  7. Guanzheng Chen (9 papers)
  8. Yue Deng (44 papers)
  9. Sen Yang (191 papers)
  10. Chaoqun Liu (38 papers)
  11. Hang Zhang (164 papers)
  12. Lidong Bing (144 papers)
  13. Zhiqiang Hu (48 papers)
  14. Chenhui Shen (7 papers)
  15. Yew Ken Chia (24 papers)
  16. Xingxuan Li (17 papers)
  17. Jianyu Wang (84 papers)
Citations (54)

Summary

Evaluation of SeaLLMs: Addressing Linguistic Disparity in Southeast Asian LLMs

The paper "SeaLLMs -- Large Language Models for Southeast Asia" describes a comprehensive effort to mitigate the linguistic bias apparent in mainstream LLMs by introducing a series of models aimed specifically at Southeast Asian (SEA) languages. While LLMs have shown strong capabilities across numerous language tasks, their output quality degrades for languages outside the high-resource bracket, primarily due to the scarcity of relevant training data. The work addresses this imbalance by developing SeaLLM-13B and its variants, models that specialize in SEA languages such as Vietnamese, Thai, and Indonesian as well as underrepresented regional languages like Khmer and Lao, and that outperform prominent models such as ChatGPT-3.5 on these languages.

Advanced Linguistic Representation

The authors build on Meta's Llama-2 architecture, tailoring it to SEA languages through a careful vocabulary extension that resolves the inefficiencies of tokenizing non-Latin scripts. The extension adds 16,512 new tokens that substantially compress text representation, as evidenced by the reported ratios, such as a 2.7-fold reduction in Thai token length. This refined tokenization markedly improves the models' contextual processing capability, a fundamental challenge when handling long texts in low-resource languages.
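To make the compression effect concrete, the sketch below compares how many tokens the base Llama-2 tokenizer and an extended SEA tokenizer produce for the same Thai sentence. This is purely illustrative and not code from the paper; the HuggingFace model IDs and the sample sentence are assumptions, and the base Llama-2 checkpoint is gated, so substitute whichever tokenizers you have access to.

```python
# Illustrative sketch (not from the paper): measuring how an extended
# vocabulary shortens tokenized sequences for a non-Latin script.
# Model IDs below are assumptions; swap in available checkpoints.
from transformers import AutoTokenizer

thai_text = "ประเทศไทยมีภาษาและวัฒนธรรมที่หลากหลาย"  # sample Thai sentence (placeholder)

base_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")   # gated checkpoint
sea_tok = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-13b-chat")      # assumed repo id

n_base = len(base_tok.tokenize(thai_text))
n_sea = len(sea_tok.tokenize(thai_text))

print(f"Llama-2 tokens: {n_base}")
print(f"SeaLLM tokens:  {n_sea}")
print(f"Compression ratio: {n_base / n_sea:.2f}x")
```

A larger ratio means each unit of context window covers more Thai text, which is the practical benefit the paper attributes to the vocabulary extension.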

Training Paradigm and Performance

The model is further refined through a four-stage training protocol: continual pre-training, hybrid pre-training combined with supervised fine-tuning, targeted supervised fine-tuning, and self-preferencing alignment optimization. This regimen ensures that SeaLLMs not only retain the general language patterns learned from Llama-2's extensive pre-training data but also excel at the nuanced comprehension and generation tasks required for the linguistic variety of Southeast Asia.
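A minimal outline of how such a staged pipeline might be organized is sketched below. The stage names follow the summary above; the data sources, objectives, and the training call are placeholders, not the authors' actual configuration or code.

```python
# Illustrative outline of a staged training pipeline (placeholder values only).
STAGES = [
    {"name": "continual_pretraining", "data": "sea_unlabeled_corpus", "objective": "causal_lm"},
    {"name": "hybrid_pretrain_sft", "data": "mixed_unlabeled_and_instruction_data", "objective": "causal_lm"},
    {"name": "targeted_sft", "data": "sea_instruction_data", "objective": "causal_lm_on_responses"},
    {"name": "self_preferencing_alignment", "data": "model_generated_preferences", "objective": "preference_optimization"},
]

def run_pipeline(model, stages):
    """Run each stage in sequence, carrying the updated weights forward."""
    for stage in stages:
        print(f"Stage: {stage['name']} | data: {stage['data']} | objective: {stage['objective']}")
        # Placeholder: each stage would resume from the previous stage's checkpoint,
        # e.g. model = train(model, load_data(stage["data"]), objective=stage["objective"])
    return model
```

The key design point is that every stage starts from the previous stage's checkpoint, so general-purpose knowledge from Llama-2 is preserved while SEA-specific behavior is layered on top.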

Empirical evaluations highlight SeaLLM-13B's strength, especially relative to ChatGPT-3.5 on non-Latin SEA languages, which it outstrips by significant margins for languages like Khmer and Burmese. The results of the comprehensive Sea-bench assessments reinforce the claim that SeaLLMs are not only linguistically inclusive but also lightweight and cost-effective to operate.

Theoretical and Practical Implications

Theoretically, this research underscores the viability of region-specific adaptations of LLMs that remain competitive with universally dominant models on multilingual tasks. By combining vocabulary expansion with culturally nuanced tuning, SeaLLMs offer a blueprint for developing LLMs that respect linguistic diversity while maintaining high performance. Practically, the democratization of AI tools through SeaLLMs stands to give underserved populations better access to advanced language technologies, paving the way for localized AI applications that align with regional cultural and social norms.

Future Prospects

Anticipating future directions, this work opens avenues for continuous adaptation of LLMs using hybrid data strategies—formalizing an iterative cycle of model improvement as more training data becomes available. The implications for AI in facilitating communications across diverse linguistic landscapes are profound, particularly for regions with rich linguistic tapestries like Southeast Asia. Subsequent exploration may extend these methodologies to other underrepresented languages and evaluate longitudinal impacts across varied technological applications.

In conclusion, the development of SeaLLMs represents a substantive stride towards addressing linguistic inequality in AI, providing specialized tools that enhance understanding and communication within Southeast Asia while emulating the best practices of leading LLMs.
