Emoji-based Fine-grained Attention Network for Sentiment Analysis in the Microblog Comments (2206.12262v1)

Published 10 Jun 2022 in cs.CL

Abstract: Microblogs have become a social platform where people express their emotions in real time, and analyzing users' emotional tendencies from microblog content is a growing trend. The dynamic features of emojis can affect the sentiment polarity of microblog texts. Since existing models seldom consider the diversity of emoji sentiment polarity, this paper proposes a microblog sentiment classification model, ALBERT-FAET. We obtain text embeddings via the pretrained ALBERT model and learn inter-emoji embeddings with an attention-based LSTM network. In addition, a fine-grained attention mechanism is proposed to capture word-level interactions between plain text and emojis. Finally, we concatenate these features and feed them into a CNN classifier to predict the sentiment labels of the microblogs. To verify the effectiveness of the model and the fine-grained attention network, we conduct comparison and ablation experiments. The comparison experiments show that the model outperforms previous methods on three evaluation metrics (accuracy, precision, and recall) and significantly improves sentiment classification. The ablation experiments show that ALBERT-FAET outperforms ALBERT-AET on these metrics, indicating that the fine-grained attention network can capture the diverse sentiment information of emojis.
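The abstract outlines a four-stage pipeline: ALBERT word embeddings, an attention-based LSTM over the emoji sequence, fine-grained word-emoji cross attention, and a CNN classifier over the concatenated features. The PyTorch sketch below shows one plausible assembly of these stages; the dot-product form of the cross attention, the BiLSTM sizing, the CNN kernel sizes, and all module and variable names (`FineGrainedAttention`, `AlbertFAET`, `DummyEncoder`, `emoji_vocab_size`, etc.) are assumptions for illustration, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from types import SimpleNamespace


class FineGrainedAttention(nn.Module):
    """Word-level cross attention between plain-text tokens and emoji tokens.

    An assumed dot-product reading of the paper's fine-grained attention;
    the published formulation may differ in detail.
    """

    def forward(self, text, emoji):
        # text: (B, T, D) word features; emoji: (B, E, D) emoji features
        scores = torch.bmm(text, emoji.transpose(1, 2))                   # (B, T, E)
        t2e = torch.bmm(F.softmax(scores, dim=2), emoji)                  # words attend to emojis
        e2t = torch.bmm(F.softmax(scores, dim=1).transpose(1, 2), text)   # emojis attend to words
        return t2e, e2t


class AlbertFAET(nn.Module):
    def __init__(self, text_encoder, emoji_vocab_size, dim=768, num_classes=3):
        super().__init__()
        self.text_encoder = text_encoder              # e.g. transformers.AlbertModel
        self.emoji_emb = nn.Embedding(emoji_vocab_size, dim)
        # BiLSTM over the emoji sequence yields contextual inter-emoji features
        self.emoji_lstm = nn.LSTM(dim, dim // 2, batch_first=True, bidirectional=True)
        self.attn = FineGrainedAttention()
        # CNN classifier over word-level features fused with emoji context
        self.convs = nn.ModuleList(nn.Conv1d(2 * dim, 100, k) for k in (3, 4, 5))
        self.fc = nn.Linear(3 * 100 + dim, num_classes)

    def forward(self, input_ids, attention_mask, emoji_ids):
        # (B, T, D) contextual word embeddings from the pretrained encoder
        text = self.text_encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        emoji, _ = self.emoji_lstm(self.emoji_emb(emoji_ids))            # (B, E, D)
        t2e, e2t = self.attn(text, emoji)
        fused = torch.cat([text, t2e], dim=-1).transpose(1, 2)           # (B, 2D, T)
        pooled = [F.relu(conv(fused)).max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled + [e2t.mean(dim=1)], dim=-1)         # CNN + emoji context
        return self.fc(features)


class DummyEncoder(nn.Module):
    """Stand-in for a pretrained ALBERT encoder so the sketch runs offline."""

    def __init__(self, vocab_size=30000, dim=768):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, input_ids, attention_mask=None):
        # attention_mask is ignored in this stand-in
        return SimpleNamespace(last_hidden_state=self.emb(input_ids))


model = AlbertFAET(DummyEncoder(), emoji_vocab_size=500)
input_ids = torch.randint(0, 30000, (2, 16))   # batch of 2, 16 text tokens each
emoji_ids = torch.randint(0, 500, (2, 4))      # 4 emoji tokens per microblog
logits = model(input_ids, torch.ones_like(input_ids), emoji_ids)
print(logits.shape)                            # torch.Size([2, 3])
```

Deriving both attention directions from a single word-by-emoji score matrix keeps the interaction at the word level, which matches the abstract's contrast between the fine-grained ALBERT-FAET and the coarser ALBERT-AET baseline in the ablation.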

Authors (5)
  1. Deng Yang (1 paper)
  2. Liu Kejian (1 paper)
  3. Yang Cheng (50 papers)
  4. Feng Yuanyuan (1 paper)
  5. Li Weihao (1 paper)
Citations (3)
