
Customising General Large Language Models for Specialised Emotion Recognition Tasks (2310.14225v1)

Published 22 Oct 2023 in cs.CL

Abstract: The advent of LLMs has attracted tremendous attention over the past year. Previous studies have shown the astonishing performance of LLMs not only in general tasks but also in emotion recognition, in terms of accuracy, universality, explanation, robustness, few/zero-shot learning, and others. Leveraging the capability of LLMs inevitably becomes an essential solution for emotion recognition. To this end, we further comprehensively investigate how LLMs perform in linguistic emotion recognition when we concentrate on this specific task. Specifically, we exemplify a publicly available and widely used LLM -- Chat General LLM, and customise it for our target by using two different modal adaptation techniques, i.e., deep prompt tuning and low-rank adaptation. The experimental results obtained on six widely used datasets show that the adapted LLM can easily outperform other state-of-the-art but specialised deep models. This indicates the strong transferability and feasibility of LLMs in the field of emotion recognition.
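The abstract's second adaptation technique, low-rank adaptation (LoRA), can be sketched in a few lines: a frozen pretrained weight matrix W is augmented with a trainable low-rank product B @ A, so only the small factors are updated during fine-tuning. The shapes, rank, and initialisation below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal LoRA sketch: the frozen weight W gains a trainable low-rank
# update B @ A, so the adapted layer computes y = x @ (W + B @ A).T.
rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 4   # rank << d_in keeps the update cheap (assumed values)

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-initialised

def adapted_forward(x):
    """Forward pass with the low-rank update added to the frozen weight."""
    return x @ (W + B @ A).T

x = rng.standard_normal((1, d_in))
# With B initialised to zero, the adapted layer exactly matches the base layer.
assert np.allclose(adapted_forward(x), x @ W.T)
print("trainable params:", A.size + B.size, "frozen params:", W.size)
```

Note how zero-initialising B guarantees the model starts from the pretrained behaviour; only A and B (512 parameters here, versus 4096 frozen) receive gradients during task adaptation.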

Authors (7)
  1. Liyizhe Peng
  2. Zixing Zhang
  3. Tao Pang
  4. Jing Han
  5. Huan Zhao
  6. Hao Chen
  7. Björn W. Schuller
Citations (10)