NTUA-SLP at SemEval-2018 Task 1: Predicting Affective Content in Tweets with Deep Attentive RNNs and Transfer Learning (1804.06658v1)

Published 18 Apr 2018 in cs.CL

Abstract: In this paper we present the deep-learning models that we submitted to the SemEval-2018 Task 1 competition: "Affect in Tweets". We participated in all subtasks for English tweets. We propose a Bi-LSTM architecture equipped with a multi-layer self-attention mechanism. The attention mechanism improves model performance and allows us to identify salient words in tweets, as well as gain insight into the models, making them more interpretable. Our model utilizes a set of word2vec word embeddings trained on a large collection of 550 million Twitter messages, augmented by a set of word affective features. Due to the limited amount of task-specific training data, we opted for a transfer learning approach, pretraining the Bi-LSTMs on the dataset of SemEval 2017, Task 4A. The proposed approach ranked 1st in Subtask E "Multi-Label Emotion Classification", 2nd in Subtask A "Emotion Intensity Regression", and achieved competitive results in the other subtasks.
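The core architecture described in the abstract, a Bi-LSTM encoder whose hidden states are pooled by a multi-layer self-attention mechanism before classification, can be sketched as below. This is a minimal illustrative sketch in PyTorch: the layer sizes, attention depth, and random embedding initialization are assumptions for brevity, not the authors' actual hyperparameters (the paper uses word2vec embeddings pretrained on 550M tweets plus affective features).

```python
import torch
import torch.nn as nn

class AttentiveBiLSTM(nn.Module):
    """Bi-LSTM with multi-layer self-attention pooling (illustrative sketch)."""

    def __init__(self, vocab_size, emb_dim=300, hidden_dim=150,
                 attn_dim=64, num_labels=11):
        super().__init__()
        # The paper uses pretrained word2vec Twitter embeddings;
        # randomly initialized here for a self-contained example.
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Multi-layer attention scorer: a small MLP maps each timestep's
        # hidden state to a scalar relevance score.
        self.attention = nn.Sequential(
            nn.Linear(2 * hidden_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embedding(token_ids))      # (B, T, 2H)
        scores = self.attention(h).squeeze(-1)             # (B, T)
        weights = torch.softmax(scores, dim=-1)            # attention over words
        context = (weights.unsqueeze(-1) * h).sum(dim=1)   # weighted sum (B, 2H)
        return self.classifier(context), weights

model = AttentiveBiLSTM(vocab_size=1000)
logits, attn = model(torch.randint(0, 1000, (2, 12)))
```

Returning the attention weights alongside the logits is what enables the interpretability analysis mentioned in the abstract: the weights indicate which words the model found salient for a given prediction.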

Authors (8)
  1. Christos Baziotis (13 papers)
  2. Nikos Athanasiou (13 papers)
  3. Alexandra Chronopoulou (24 papers)
  4. Athanasia Kolovou (3 papers)
  5. Georgios Paraskevopoulos (26 papers)
  6. Nikolaos Ellinas (23 papers)
  7. Shrikanth Narayanan (151 papers)
  8. Alexandros Potamianos (44 papers)
Citations (121)