Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning (2406.11161v2)

Published 17 Jun 2024 in cs.AI and cs.MM

Abstract: Accurate emotion perception is crucial for various applications, including human-computer interaction, education, and counseling. However, traditional single-modality approaches often fail to capture the complexity of real-world emotional expressions, which are inherently multimodal. Moreover, existing Multimodal LLMs (MLLMs) face challenges in integrating audio and recognizing subtle facial micro-expressions. To address this, we introduce the MERR dataset, containing 28,618 coarse-grained and 4,487 fine-grained annotated samples across diverse emotional categories. This dataset enables models to learn from varied scenarios and generalize to real-world applications. Furthermore, we propose Emotion-LLaMA, a model that seamlessly integrates audio, visual, and textual inputs through emotion-specific encoders. By aligning features into a shared space and employing a modified LLaMA model with instruction tuning, Emotion-LLaMA significantly enhances both emotional recognition and reasoning capabilities. Extensive evaluations show Emotion-LLaMA outperforms other MLLMs, achieving top scores in Clue Overlap (7.83) and Label Overlap (6.25) on EMER, an F1 score of 0.9036 on MER2023-SEMI challenge, and the highest UAR (45.59) and WAR (59.37) in zero-shot evaluations on DFEW dataset.

The paper "Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning" (Cheng et al., 17 Jun 2024 ) introduces a novel multimodal LLM (MLLM) designed for advanced emotion recognition and reasoning. The key contributions and findings of this work are:

  • MERR Dataset: The authors created a new Multimodal Emotion Recognition and Reasoning (MERR) dataset. This dataset contains 28,618 coarse-grained and 4,487 fine-grained annotated samples, encompassing a broad spectrum of emotional categories. The MERR dataset addresses the limitations of existing multimodal emotion instruction datasets and facilitates learning across diverse scenarios.
  • Emotion-LLaMA Model: The paper details the development of Emotion-LLaMA, an MLLM that integrates audio, visual, and textual inputs through specialized emotion encoders. HuBERT processes the audio stream, while multiview visual encoders (MAE, VideoMAE, and EVA) capture detailed facial information; the resulting features are aligned into a shared embedding space and passed to a modified LLaMA model, with instruction tuning used to refine its emotional recognition and reasoning capabilities (a minimal fusion sketch appears after this list).
  • Performance Benchmarking: Emotion-LLaMA was evaluated extensively, demonstrating superior performance compared to other MLLMs across multiple datasets. Key performance metrics include:
    • Clue Overlap score of 7.83 on the EMER dataset
    • Label Overlap score of 6.25 on the EMER dataset
    • F1 score of 0.9036 on the MER2023-SEMI challenge
    • Unweighted Average Recall (UAR) of 45.59 in zero-shot evaluations on the DFEW dataset
    • Weighted Average Recall (WAR) of 59.37 in zero-shot evaluations on the DFEW dataset (a short sketch of how UAR and WAR are computed also follows this list)
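
Neither the abstract nor the summary spells out the fusion step in code, so the following is a minimal, illustrative sketch (not the authors' implementation) of the idea described above: modality-specific features such as HuBERT audio embeddings and MAE/VideoMAE/EVA frame embeddings are projected into the LLaMA embedding space and prepended to the instruction-token embeddings. The class name, feature dimensions, and tensor shapes are assumptions made for the example.

```python
# Minimal sketch (not the authors' code) of the fusion idea: modality-specific
# features are linearly projected into the language model's embedding space and
# prepended to the instruction-token embeddings. All dimensions are illustrative.
import torch
import torch.nn as nn

class MultimodalFusionSketch(nn.Module):
    def __init__(self, audio_dim=1024, visual_dim=768, llm_dim=4096):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, llm_dim)    # e.g. HuBERT features
        self.visual_proj = nn.Linear(visual_dim, llm_dim)  # e.g. MAE/VideoMAE/EVA features

    def forward(self, audio_feats, visual_feats, text_embeds):
        # audio_feats:  (batch, T_audio,  audio_dim)
        # visual_feats: (batch, T_frames, visual_dim)
        # text_embeds:  (batch, T_text,   llm_dim) -- instruction prompt embeddings
        a = self.audio_proj(audio_feats)
        v = self.visual_proj(visual_feats)
        # The fused sequence would then be fed to the instruction-tuned LLaMA decoder.
        return torch.cat([a, v, text_embeds], dim=1)

fusion = MultimodalFusionSketch()
fused = fusion(torch.randn(2, 50, 1024),   # audio tokens
               torch.randn(2, 16, 768),    # video frame tokens
               torch.randn(2, 32, 4096))   # text tokens
print(fused.shape)  # torch.Size([2, 98, 4096])
```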

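For context on the last two metrics: UAR averages per-class recall so that every emotion class contributes equally, whereas WAR weights recall by class frequency and therefore equals overall accuracy. Below is a minimal sketch of both computations on made-up labels; it is not tied to the DFEW evaluation code.

```python
# Minimal sketch of the two zero-shot metrics reported on DFEW.
# UAR: mean of per-class recalls (every class counts equally).
# WAR: recall weighted by class frequency, i.e. plain overall accuracy.
import numpy as np

def uar_war(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class_recall = [
        np.mean(y_pred[y_true == c] == c)   # fraction of class-c samples predicted as c
        for c in np.unique(y_true)
    ]
    uar = float(np.mean(per_class_recall))
    war = float(np.mean(y_pred == y_true))
    return uar, war

# Toy labels (3 emotion classes), purely for illustration.
y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 0]
print(uar_war(y_true, y_pred))  # -> (0.555..., 0.666...)
```
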
The paper's primary finding is that Emotion-LLaMA significantly improves emotional recognition and reasoning through effective multimodal input integration and instruction tuning. This establishes a new state-of-the-art for multimodal emotion analysis.

Authors (9)
  1. Zebang Cheng (10 papers)
  2. Zhi-Qi Cheng (61 papers)
  3. Jun-Yan He (27 papers)
  4. Jingdong Sun (11 papers)
  5. Kai Wang (624 papers)
  6. Yuxiang Lin (7 papers)
  7. Zheng Lian (51 papers)
  8. Xiaojiang Peng (59 papers)
  9. Alexander Hauptmann (46 papers)