Contextual Emotion Recognition using Large Vision Language Models (2405.08992v1)

Published 14 May 2024 in cs.CV

Abstract: "How does the person in the bounding box feel?" Achieving human-level recognition of the apparent emotion of a person in real world situations remains an unsolved task in computer vision. Facial expressions are not enough: body pose, contextual knowledge, and commonsense reasoning all contribute to how humans perform this emotional theory of mind task. In this paper, we examine two major approaches enabled by recent large vision LLMs: 1) image captioning followed by a language-only LLM, and 2) vision LLMs, under zero-shot and fine-tuned setups. We evaluate the methods on the Emotions in Context (EMOTIC) dataset and demonstrate that a vision LLM, fine-tuned even on a small dataset, can significantly outperform traditional baselines. The results of this work aim to help robots and agents perform emotionally sensitive decision-making and interaction in the future.

Contextual Emotion Recognition using Large Vision Language Models

The paper "Contextual Emotion Recognition using Large Vision LLMs" presents an exploration into the advancement of contextual emotion recognition by leveraging Large Vision LLMs (VLMs) and LLMs. The authors identify the significant limitations of traditional emotion recognition systems, particularly their over-reliance on facial expressions and failure to incorporate contextual information such as body pose and environmental factors. This reliance has historically led to lower accuracy in emotion recognition tasks, especially when facing novel scenarios.

Emotion recognition systems that focus solely on facial and bodily expressions fall short of the emotional theory of mind that humans perform, which also draws on contextual and commonsense knowledge. The research investigates two main approaches: 1) a two-phase method that generates image captions and then performs language-only inference with LLMs, and 2) end-to-end vision language models (VLMs). Both were evaluated on the Emotions in Context (EMOTIC) dataset, which is challenging because its emotion annotations depend on diverse contextual and environmental factors.
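To make the two-phase setup concrete, here is a minimal sketch of its second phase: a pre-computed image caption is handed to a language-only LLM, which selects emotion labels. It assumes the OpenAI Python client and GPT-4; the caption text, prompt wording, and label subset are illustrative, not the authors' exact protocol.

```python
# Phase 2 of the two-phase approach: language-only emotion inference from a caption.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# In the paper's pipeline this caption comes from the captioning phase
# (e.g. ExpansionNet captions or narrative NarraCap captions).
caption = ("A middle-aged man in a yellow raincoat stands alone at a bus stop "
           "in heavy rain, looking at his phone.")

# Illustrative subset of EMOTIC's discrete emotion categories.
candidate_labels = ["happiness", "sadness", "anticipation",
                    "annoyance", "fatigue", "peace"]

prompt = (
    f"Image description: {caption}\n"
    f"From this list {candidate_labels}, which emotions does the described "
    "person most plausibly feel? Answer with a comma-separated list."
)

response = client.chat.completions.create(
    model="gpt-4",  # the paper uses GPT-4 for inference; any chat model works in this sketch
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

predicted = [label.strip().lower()
             for label in response.choices[0].message.content.split(",")]
print(predicted)
```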

Methodology and Evaluation

The two-phase approach uses CLIP to build narrative captions, referred to as NarraCap, that combine gender, age, activity, and context; the captions are then passed to LLMs such as GPT-4 for emotional inference. These narrative captions were compared against conventional captions generated with ExpansionNet, again using LLMs for the emotional inference. The VLM approach covered zero-shot and fine-tuned setups with models such as CLIP, GPT-4 Vision, and LLaVA.
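As an illustration of the end-to-end VLM setup in its zero-shot form, the sketch below queries an off-the-shelf LLaVA checkpoint with an image and the question from the abstract. The checkpoint name (llava-hf/llava-1.5-7b-hf via Hugging Face transformers), the prompt format, and the bounding-box handling are assumptions for illustration; the paper's exact models and prompts may differ.

```python
# Zero-shot emotion query against a LLaVA-style vision language model.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint, not necessarily the paper's
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # requires `accelerate`
)

# The image would contain the target person, e.g. marked with a drawn bounding box.
image = Image.open("emotic_scene.jpg")

# LLaVA-1.5 chat format; the label list is an illustrative subset of EMOTIC categories.
prompt = (
    "USER: <image>\nHow does the person in the bounding box feel? "
    "Choose all that apply from: happiness, sadness, anticipation, annoyance, "
    "fatigue, peace. ASSISTANT:"
)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)
output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```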

The evaluation relies on precision, recall, F1 score, Hamming loss, and subset accuracy. Results indicate that fine-tuning a VLM such as LLaVA, even on a small dataset, outperforms traditional baselines; notably, the fine-tuned LLaVA achieved the highest F1 score, showing robust emotion label prediction. The research emphasizes the importance of contextual image details, suggesting that understanding the actions and environment within an image significantly improves emotion recognition accuracy.
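Because EMOTIC annotation is multi-label, these metrics operate on binary indicator matrices. A short scikit-learn sketch is given below; the toy matrices and the sample-wise averaging are assumptions for illustration, and the paper's exact averaging scheme may differ.

```python
# Multi-label evaluation metrics over binary indicator matrices
# of shape (num_examples, num_emotion_categories).
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, hamming_loss,
                             precision_score, recall_score)

y_true = np.array([[1, 0, 1, 0],   # toy ground-truth labels
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0],   # toy model predictions
                   [0, 1, 0, 1],
                   [1, 1, 0, 1]])

print("precision      :", precision_score(y_true, y_pred, average="samples", zero_division=0))
print("recall         :", recall_score(y_true, y_pred, average="samples", zero_division=0))
print("F1             :", f1_score(y_true, y_pred, average="samples", zero_division=0))
print("Hamming loss   :", hamming_loss(y_true, y_pred))
print("subset accuracy:", accuracy_score(y_true, y_pred))  # exact-match ratio
```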

Implications and Future Developments

The findings carry several practical and theoretical implications. First, integrating contextual information with VLMs and LLMs improves emotional reasoning, which is essential for socially intelligent AI agents and for emotionally sensitive human-robot interaction. Furthermore, the strong results from fine-tuned VLMs show that effective emotion recognition models can be trained on limited data, pointing to cost-effective solutions in resource-constrained settings.

This paper opens several avenues for future research. Enhancing the narrative captioning process, particularly by including social and object interactions, could improve emotion predictions. Overcoming challenges with visual markers such as bounding boxes could further refine VLM performance. Expanding to other datasets and deploying the models in diverse, real-world scenarios will be critical for assessing how well these systems generalize.

In conclusion, while the paper demonstrates notable progress in contextual emotion recognition with large models, it also underscores the complexity and multifaceted nature of the task, pointing to the need for continued, methodical exploration.

Authors
  1. Yasaman Etesam
  2. Chuxuan Zhang
  3. Angelica Lim
  4. Özge Nilay Yalçın