
NTUA-SLP at SemEval-2018 Task 2: Predicting Emojis using RNNs with Context-aware Attention (1804.06657v1)

Published 18 Apr 2018 in cs.CL

Abstract: In this paper we present a deep-learning model that competed at SemEval-2018 Task 2, "Multilingual Emoji Prediction". We participated in Subtask A, which asks systems to predict the most likely emoji associated with an English tweet. The proposed architecture relies on a Long Short-Term Memory (LSTM) network, augmented with an attention mechanism that conditions the weight of each word on a "context vector", taken as an aggregation of the tweet's meaning. Moreover, we initialize the embedding layer of our model with word2vec word embeddings pretrained on a dataset of 550 million English tweets. Finally, our model does not rely on hand-crafted features or lexicons and is trained end-to-end with back-propagation. We ranked 2nd out of 48 teams.
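The context-aware attention described in the abstract can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: it assumes an additive scoring function, takes the context vector as the mean of the hidden states, and uses illustrative parameter names (`Wh`, `Wc`, `v`) rather than the paper's notation.

```python
import numpy as np

def context_aware_attention(H, Wh, Wc, v):
    """Attention over LSTM hidden states H (T x d), conditioned on a
    context vector taken here as the mean of H (a simple aggregation
    of the tweet's meaning). Wh, Wc (k x d) and v (k,) are learned
    parameters; this scoring form is an assumption for illustration."""
    c = H.mean(axis=0)                         # context vector for the tweet
    scores = np.tanh(H @ Wh.T + c @ Wc.T) @ v  # score each word against the context
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over time steps
    return weights @ H, weights                # attended representation, word weights

# Toy usage with random stand-ins for LSTM hidden states.
rng = np.random.default_rng(0)
T, d, k = 5, 8, 4
H = rng.standard_normal((T, d))
r, a = context_aware_attention(H,
                               rng.standard_normal((k, d)),
                               rng.standard_normal((k, d)),
                               rng.standard_normal(k))
```

The attended representation `r` is a weighted sum of the hidden states, so words the model scores as more relevant to the tweet's overall meaning contribute more to the final emoji prediction.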

Authors (6)
  1. Christos Baziotis (13 papers)
  2. Nikos Athanasiou (13 papers)
  3. Georgios Paraskevopoulos (26 papers)
  4. Nikolaos Ellinas (23 papers)
  5. Athanasia Kolovou (3 papers)
  6. Alexandros Potamianos (44 papers)
Citations (15)
