Robust and Consistent Estimation of Word Embedding for Bangla Language by fine-tuning Word2Vec Model (2010.13404v3)

Published 26 Oct 2020 in cs.CL and cs.LG

Abstract: Word embedding, the vector representation of a word, captures syntactic and semantic characteristics of that word, which can serve as an informative feature for any machine learning-based natural language processing model. Several deep learning-based approaches exist for the vectorization of words, such as word2vec, fastText, and GloVe (with implementations available in toolkits like Gensim). In this study, we analyze the word2vec model for learning word vectors by tuning different hyper-parameters and present the most effective word embedding for the Bangla language. To test the performance of the different word embeddings generated by fine-tuning the word2vec model, we perform both intrinsic and extrinsic evaluations. For intrinsic evaluation, we cluster the word vectors to examine the relational similarity of words; for extrinsic evaluation, we use the different word embeddings as features of a news article classifier. From our experiments, we find that 300-dimensional word vectors generated by the skip-gram method of the word2vec model with a sliding window size of 4 give the most robust vector representations for the Bangla language.
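
The reported best configuration maps directly onto standard word2vec hyper-parameters. As a minimal sketch, assuming the Gensim 4.x Word2Vec API and a hypothetical toy tokenized Bangla corpus (the paper does not publish its training code or preprocessing pipeline), the setup would look like this:

```python
# Sketch of the paper's best-performing configuration: skip-gram,
# 300-dimensional vectors, window size 4. The corpus below is a
# hypothetical placeholder; real preprocessing (normalization,
# tokenization) is corpus-specific and omitted here.
from gensim.models import Word2Vec

# Each sentence is a list of Bangla tokens.
corpus = [
    ["বাংলা", "ভাষা", "সমৃদ্ধ"],
    ["খবর", "শ্রেণীবিভাগ", "পরীক্ষা"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=300,  # 300-dimensional embeddings, per the paper's finding
    window=4,         # sliding window size of 4
    sg=1,             # sg=1 selects skip-gram training (sg=0 would be CBOW)
    min_count=1,      # keep every token in this toy corpus
)

# Retrieve a learned vector and its nearest neighbours, the kind of
# relational-similarity check used in the paper's intrinsic evaluation.
vec = model.wv["বাংলা"]
print(vec.shape)                        # (300,)
print(model.wv.most_similar("বাংলা"))
```

For the extrinsic evaluation described above, vectors like `model.wv[token]` would be aggregated per document and fed as features to a news article classifier.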

Authors (1)
  1. Rifat Rahman (2 papers)
Citations (8)