
Deep Multi-Task Model for Sarcasm Detection and Sentiment Analysis in Arabic Language (2106.12488v1)

Published 23 Jun 2021 in cs.CL

Abstract: The prominence of figurative language devices, such as sarcasm and irony, poses serious challenges for Arabic Sentiment Analysis (SA). While previous research has tackled SA and sarcasm detection separately, this paper introduces an end-to-end deep Multi-Task Learning (MTL) model that allows knowledge interaction between the two tasks. Our MTL model's architecture consists of a Bidirectional Encoder Representations from Transformers (BERT) model, a multi-task attention interaction module, and two task classifiers. The overall results show that our proposed model outperforms its single-task counterparts on both the SA and sarcasm detection sub-tasks.
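The abstract describes a shared encoder feeding a cross-task attention module and two task-specific classifiers. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; it substitutes a small Transformer encoder for BERT and guesses at the interaction module with per-task multi-head attention, since the paper's exact layer design is not given here.

```python
import torch
import torch.nn as nn

class MultiTaskSketch(nn.Module):
    """Hypothetical sketch of the described MTL architecture:
    shared encoder -> per-task attention views -> two classifiers.
    A small Transformer encoder stands in for BERT."""

    def __init__(self, vocab_size=1000, d_model=64, n_sentiment=3, n_sarcasm=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Guessed interaction module: each task attends over the shared
        # token representations to build its own view of the input.
        self.sent_attn = nn.MultiheadAttention(d_model, 4, batch_first=True)
        self.sarc_attn = nn.MultiheadAttention(d_model, 4, batch_first=True)
        self.sent_head = nn.Linear(d_model, n_sentiment)  # sentiment classifier
        self.sarc_head = nn.Linear(d_model, n_sarcasm)    # sarcasm classifier

    def forward(self, ids):
        h = self.encoder(self.embed(ids))       # shared representation
        s, _ = self.sent_attn(h, h, h)          # sentiment-focused view
        r, _ = self.sarc_attn(h, h, h)          # sarcasm-focused view
        # Mean-pool over tokens, then classify each task.
        return self.sent_head(s.mean(dim=1)), self.sarc_head(r.mean(dim=1))

model = MultiTaskSketch()
ids = torch.randint(0, 1000, (2, 16))           # batch of 2 sequences, 16 tokens
sent_logits, sarc_logits = model(ids)
print(sent_logits.shape, sarc_logits.shape)
```

Training such a model end to end would typically sum a cross-entropy loss per task, letting gradients from both objectives shape the shared encoder.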

Authors (6)
  1. Abdelkader El Mahdaouy (7 papers)
  2. Abdellah El Mekki (13 papers)
  3. Kabil Essefar (3 papers)
  4. Nabil El Mamoun (2 papers)
  5. Ismail Berrada (20 papers)
  6. Ahmed Khoumsi (4 papers)
Citations (33)
