
A Subword Level Language Model for Bangla Language (1911.07613v1)

Published 15 Nov 2019 in cs.CL and cs.LG

Abstract: Language models are at the core of natural language processing. The ability to represent natural language gives rise to applications in numerous NLP tasks, including text classification, summarization, and translation. Research in this area is very limited in Bangla due to the scarcity of resources, apart from some count-based models and a few very recent neural language models, which are all word-based and of limited practical use due to their high perplexity. This paper addresses the issue of perplexity and proposes a subword-level neural language model built on the AWD-LSTM architecture, together with other techniques suited to training on the Bangla language. The model is trained on a sizeable corpus of Bangla newspaper articles containing more than 28.5 million word tokens. A performance comparison with various other models shows the significant reduction in perplexity the proposed model achieves, reaching as low as 39.84 in just 20 epochs.
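
The abstract does not specify which subword segmentation scheme or vocabulary size the authors used, nor the exact training setup. The sketch below is only illustrative: it uses SentencePiece BPE as a stand-in subword tokenizer and shows how a perplexity value like the reported 39.84 relates to the average per-token cross-entropy. The corpus path, vocabulary size, and loss value are hypothetical.

```python
# Illustrative sketch only: the paper's actual subword scheme, vocabulary size,
# and training details are not given in the abstract. SentencePiece BPE is used
# here as a stand-in; the corpus path and all numbers are hypothetical.
import math
import sentencepiece as spm

# 1) Train a subword (BPE) model on a plain-text Bangla corpus
#    (assumes one sentence per line in "bangla_corpus.txt").
spm.SentencePieceTrainer.train(
    input="bangla_corpus.txt",   # hypothetical corpus file
    model_prefix="bn_subword",
    vocab_size=8000,             # hypothetical vocabulary size
    model_type="bpe",
    character_coverage=1.0,      # keep all Bangla characters
)

# 2) Segment text into the subword pieces a language model would be trained on.
sp = spm.SentencePieceProcessor(model_file="bn_subword.model")
pieces = sp.encode("আমি বাংলায় গান গাই", out_type=str)
print(pieces)  # actual split depends on the training corpus

# 3) Perplexity is the exponential of the average per-token cross-entropy
#    (in nats); a value like the reported 39.84 corresponds to roughly 3.69 nats.
avg_nll = 3.685  # hypothetical average negative log-likelihood per subword token
print(f"perplexity = {math.exp(avg_nll):.2f}")  # ≈ 39.8
```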

Authors (5)
  1. Aisha Khatun (9 papers)
  2. Anisur Rahman (5 papers)
  3. Hemayet Ahmed Chowdhury (4 papers)
  4. Ayesha Tasnim (4 papers)
  5. Md. Saiful Islam (57 papers)
Citations (4)
