Refashioning Emotion Recognition Modelling: The Advent of Generalised Large Models (2308.11578v1)

Published 21 Aug 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Since the inception of emotion recognition, or affective computing, it has become an increasingly active research topic due to its broad applications. Over the past couple of decades, emotion recognition models have gradually migrated from statistically shallow models to neural-network-based deep models, which significantly boost performance and consistently achieve the best results on different benchmarks. Consequently, deep models have in recent years been the default choice for emotion recognition. However, the debut of LLMs such as ChatGPT has astonished the world with emergent capabilities, including zero-/few-shot learning, in-context learning, and chain-of-thought reasoning, that were never exhibited by previous deep models. In this paper, we comprehensively investigate how LLMs perform in emotion recognition across diverse aspects, including in-context learning, few-shot learning, accuracy, generalisation, and explanation. Moreover, we offer some insights and pose remaining challenges, hoping to ignite broader discussion about enhancing emotion recognition in the new era of advanced and generalised large models.
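
The abstract centres on in-context and few-shot prompting as the mechanism being evaluated. As a purely illustrative sketch (the emotion label set, example utterances, and the `query_llm` helper below are hypothetical placeholders, not the prompts or tooling used in the paper), an in-context emotion-recognition prompt for an LLM might be assembled like this:

```python
# Illustrative sketch of few-shot / in-context emotion recognition with an LLM.
# Labels, examples, and query_llm are placeholders, not taken from the paper.

EMOTION_LABELS = ["anger", "joy", "sadness", "fear", "neutral"]

FEW_SHOT_EXAMPLES = [
    ("I can't believe they cancelled my flight again!", "anger"),
    ("We finally got the keys to our new home today.", "joy"),
    ("I keep replaying her last words in my head.", "sadness"),
]

def build_prompt(utterance: str) -> str:
    """Assemble an in-context prompt: task instruction, labelled examples, then the query."""
    lines = [
        "Classify the emotion of the final utterance. "
        f"Answer with one label from: {', '.join(EMOTION_LABELS)}.",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Utterance: {text}\nEmotion: {label}\n")
    lines.append(f"Utterance: {utterance}\nEmotion:")
    return "\n".join(lines)

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion-style LLM endpoint."""
    raise NotImplementedError("Plug in your preferred LLM client here.")

if __name__ == "__main__":
    prompt = build_prompt("My heart was pounding when the lights went out.")
    print(prompt)  # inspect the assembled in-context prompt
    # prediction = query_llm(prompt).strip().lower()
```

A zero-shot variant simply drops the labelled examples from the prompt; the paper compares such settings against conventional fine-tuned deep models on accuracy, generalisation, and explanation.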

Authors (6)
  1. Zixing Zhang (26 papers)
  2. Liyizhe Peng (2 papers)
  3. Tao Pang (14 papers)
  4. Jing Han (60 papers)
  5. Huan Zhao (109 papers)
  6. Björn W. Schuller (9 papers)
Citations (10)