
Transformer-based approaches to Sentiment Detection (2303.07292v1)

Published 13 Mar 2023 in cs.CL

Abstract: The use of transfer learning methods is largely responsible for the present breakthrough in Natural Language Processing (NLP) tasks across multiple domains. To address the problem of sentiment detection, we examined the performance of four well-known state-of-the-art transformer models for text classification: Bidirectional Encoder Representations from Transformers (BERT), the Robustly Optimized BERT Pre-training Approach (RoBERTa), a distilled version of BERT (DistilBERT), and a large bidirectional neural network architecture (XLNet). The performance of the four models used to detect disasters in text was compared. All the models performed well, indicating that transformer-based models are suitable for detecting disasters in text. The RoBERTa transformer model performs best on the test dataset, with a score of 82.6%, and is highly recommended for quality predictions. Furthermore, we found that the learning algorithms' performance was influenced by the pre-processing techniques, the nature of the words in the vocabulary, unbalanced labeling, and the model parameters.
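As a rough illustration of the fine-tuning setup the abstract describes (comparing pretrained transformers on a binary disaster-text classification task), here is a minimal sketch using the Hugging Face Transformers library. This is not the authors' code: the CSV file names, column names ("text", "label"), and hyperparameters are assumptions for demonstration only.

```python
# Minimal sketch: fine-tune RoBERTa for binary disaster-text classification.
# Swap MODEL_NAME for "bert-base-uncased", "distilbert-base-uncased", or
# "xlnet-base-cased" to compare the other models the paper evaluates.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "roberta-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Hypothetical CSV files with "text" and "label" columns (not from the paper).
raw = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    # Truncate/pad tweets to a fixed length before feeding the transformer.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)

trainer.train()
print(trainer.evaluate())  # reports loss and any configured metrics on the test split
```

Repeating this loop for each of the four pretrained checkpoints and comparing the test scores mirrors the comparison reported in the paper, where RoBERTa came out ahead.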

Authors (6)
  1. Olumide Ebenezer Ojo (4 papers)
  2. Hoang Thang Ta (3 papers)
  3. Alexander Gelbukh (52 papers)
  4. Hiram Calvo (5 papers)
  5. Olaronke Oluwayemisi Adebanji (2 papers)
  6. Grigori Sidorov (45 papers)
Citations (6)
