
Transformer based neural networks for emotion recognition in conversations (2405.11222v1)

Published 18 May 2024 in cs.CL

Abstract: This paper outlines the approach of the ISDS-NLP team in the SemEval 2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF). For Subtask 1 we obtained a weighted F1 score of 0.43 and placed 12th on the leaderboard. We investigate two distinct approaches: Masked Language Modeling (MLM) and Causal Language Modeling (CLM). For MLM, we employ pre-trained BERT-like models in a multilingual setting, fine-tuning them with a classifier to predict emotions. Experiments with varying input lengths, classifier architectures, and fine-tuning strategies demonstrate the effectiveness of this approach. Additionally, we utilize Mistral 7B Instruct V0.2, a state-of-the-art model, applying zero-shot and few-shot prompting techniques. Our findings indicate that while Mistral shows promise, MLMs currently outperform it in sentence-level emotion classification.
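The weighted F1 score reported above averages per-class F1 with each class weighted by its support (its count in the gold labels), so frequent emotion classes dominate the metric. A minimal sketch of that computation, not the authors' evaluation code, with hypothetical emotion labels:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to each
    class's support in the gold labels (y_true)."""
    labels = set(y_true) | set(y_pred)
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == label and t != label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (support[label] / total) * f1  # weight by class frequency
    return score

# Example with made-up labels: classes unseen in predictions still
# contribute their (zero) F1 weighted by support.
gold = ["joy", "joy", "anger", "sadness"]
pred = ["joy", "anger", "anger", "sadness"]
print(weighted_f1(gold, pred))  # 0.75
```

In practice `sklearn.metrics.f1_score(y_true, y_pred, average="weighted")` computes the same quantity; the hand-rolled version above just makes the weighting explicit.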

Authors (2)
  1. Claudiu Creanga
  2. Liviu P. Dinu
Citations (1)

