
Improved Text Emotion Prediction Using Combined Valence and Arousal Ordinal Classification (2404.01805v1)

Published 2 Apr 2024 in cs.LG

Abstract: Emotion detection in textual data has received growing interest in recent years, as it is pivotal for developing empathetic human-computer interaction systems. This paper introduces a method for categorizing emotions from text that acknowledges and differentiates between the varied similarities and distinctions among emotions. Initially, we establish a baseline by training a transformer-based model for standard emotion classification, achieving state-of-the-art performance. We argue that not all misclassifications are equally important, as there are perceptual similarities among emotional classes. We thus redefine the emotion labeling problem by shifting it from a traditional classification model to an ordinal classification one, where discrete emotions are arranged in a sequential order according to their valence levels. Finally, we propose a method that performs ordinal classification in the two-dimensional emotion space, considering both valence and arousal scales. The results show that our approach not only preserves high accuracy in emotion prediction but also significantly reduces the magnitude of errors in cases of misclassification.
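The core idea, recasting emotion labeling as ordinal classification over valence ranks, lends itself to a compact illustration. Below is a minimal PyTorch sketch. It is not the authors' code: the valence ordering of the six emotions, the Frank and Hall threshold decomposition (one binary "is the label above rank k?" head per threshold), and the random features standing in for transformer sentence embeddings are all assumptions made for the example.

```python
# Minimal sketch of ordinal emotion classification over valence ranks.
# NOT the paper's implementation: the valence ordering below, the
# Frank & Hall threshold decomposition, and the random features standing
# in for transformer [CLS] embeddings are all assumptions.
import torch
import torch.nn as nn

# Hypothetical valence ordering, most negative to most positive.
EMOTIONS = ["sadness", "fear", "anger", "neutral", "surprise", "joy"]
K = len(EMOTIONS)

class OrdinalHead(nn.Module):
    """K ordered classes -> K-1 binary thresholds estimating P(y > k)."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.thresholds = nn.Linear(dim, num_classes - 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.thresholds(h)  # logits, shape (batch, K-1)

def ordinal_targets(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    # Rank j becomes [1]*j + [0]*(K-1-j): the first j thresholds answer "yes".
    ks = torch.arange(num_classes - 1, device=labels.device)
    return (labels.unsqueeze(1) > ks).float()

def predict_rank(logits: torch.Tensor) -> torch.Tensor:
    # Predicted rank = number of thresholds judged "exceeded".
    return (torch.sigmoid(logits) > 0.5).sum(dim=1)

# One toy optimization step; h would come from a fine-tuned encoder in practice.
dim, batch = 768, 8
head = OrdinalHead(dim, K)
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

h = torch.randn(batch, dim)          # placeholder sentence embeddings
y = torch.randint(0, K, (batch,))    # gold emotion ranks along the valence axis
opt.zero_grad()
loss = loss_fn(head(h), ordinal_targets(y, K))
loss.backward()
opt.step()
print(predict_rank(head(h)))         # integer ranks, decodable via EMOTIONS
```

The paper's full method works in the two-dimensional valence-arousal space; under the same assumptions, that would amount to a second threshold head trained on arousal ranks, with the two recovered ranks jointly determining the predicted emotion. The payoff the abstract describes is that mistakes under such a loss tend to land on perceptually adjacent emotions rather than arbitrary ones.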
