Customising General Large Language Models for Specialised Emotion Recognition Tasks (2310.14225v1)
Abstract: The advent of LLMs has attracted tremendous attention over the past year. Previous studies have shown the astonishing performance of LLMs not only in general tasks but also in emotion recognition, in terms of accuracy, universality, explainability, robustness, few/zero-shot learning, and more. Leveraging the capability of LLMs thus becomes an essential solution for emotion recognition. To this end, we further investigate comprehensively how LLMs perform in linguistic emotion recognition when we concentrate on this specific task. Specifically, we take a publicly available and widely used LLM -- Chat General LLM -- as an example, and customise it for our target using two different model adaptation techniques, i.e., deep prompt tuning and low-rank adaptation. The experimental results obtained on six widely used datasets show that the adapted LLM can easily outperform other state-of-the-art but specialised deep models. This indicates the strong transferability and feasibility of LLMs in the field of emotion recognition.
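The abstract names low-rank adaptation (LoRA) as one of the two adaptation techniques. As a minimal sketch of the general idea (not the paper's actual implementation, and with all dimensions and names chosen for illustration): a frozen weight matrix W is augmented by a trainable rank-r update B @ A, scaled by alpha / r, so only the two small factor matrices need training.

```python
import numpy as np

# Illustrative sketch of low-rank adaptation (LoRA); dimensions are arbitrary.
# Instead of updating a full weight matrix W (d_out x d_in), we learn two small
# matrices B (d_out x r) and A (r x d_in) with r << min(d_out, d_in), and use
# W + (alpha / r) * B @ A as the effective weight.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 128, 4, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (init 0)

def lora_forward(x):
    """Forward pass with the rank-r correction folded into W."""
    delta = (alpha / r) * (B @ A)
    return (W + delta) @ x

x = rng.standard_normal(d_in)
y = lora_forward(x)
# Because B is initialised to zero, the adapted layer starts out
# identical to the frozen base layer.
print(np.allclose(y, W @ x))
```

Initialising B to zero is the standard choice: training begins from the unmodified base model, and only the (d_out + d_in) * r adapter parameters are updated, which is what makes customising a large general model for a narrow task like emotion recognition cheap.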
- Liyizhe Peng (2 papers)
- Zixing Zhang (26 papers)
- Tao Pang (14 papers)
- Jing Han (60 papers)
- Huan Zhao (109 papers)
- Hao Chen (1005 papers)
- Björn W. Schuller (153 papers)