Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation (2409.09135v1)

Published 13 Sep 2024 in cs.AI, cs.CL, cs.HC, and cs.LG

Abstract: Over the past decade, wearable computing devices ("smart glasses") have undergone remarkable advancements in sensor technology, design, and processing power, ushering in a new era of opportunity for high-density human behavior data. Equipped with wearable cameras, these glasses offer a unique opportunity to analyze non-verbal behavior in natural settings as individuals interact. Our focus lies in predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion. Leveraging such analyses may revolutionize our understanding of human communication, foster more effective collaboration in professional environments, provide better mental health support through empathetic virtual interactions, and enhance accessibility for those with communication barriers. In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation. We introduce a novel fusion strategy using LLMs to integrate multiple behavior modalities into a "multimodal transcript" that can be processed by an LLM for behavioral reasoning tasks. Remarkably, this method achieves performance comparable to established fusion techniques even in its preliminary implementation, indicating strong potential for further research and optimization. This fusion method is one of the first to approach "reasoning" about real-world human behavior through an LLM. Smart glasses provide us the ability to unobtrusively gather high-density multimodal data on human behavior, paving the way for new approaches to understanding and improving human communication with the potential for important societal benefits. The features and data collected during the studies will be made publicly available to promote further research.

Summary

  • The paper presents a novel multimodal fusion method leveraging LLMs and smart glasses to predict engagement in natural conversation.
  • It integrates video, audio, gaze, and facial expression data to simulate participant perspectives and enhance prediction accuracy.
  • Findings reveal that although classical models often excel, LLM fusion holds promise for advancing socially aware technologies and AR applications.

Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation

The paper, "Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation," presents a nuanced paper on the prediction of engagement levels in dyadic interactions using multimodal data captured through wearable computing devices, specifically smart glasses. The focus lies on addressing the inherent challenges in assessing engagement during natural conversations and proposing innovative methods to quantify this engagement through multimodal data fusion with LLMs.

Introduction

The paper recognizes the pivotal role of engagement in effective communication and explores the potential of smart glasses in natural, real-world social contexts to capture unimpaired social behavior. The technology's utility is demonstrated through its ability to unobtrusively gather high-resolution data on visual, auditory, and motion-based cues, presenting a significant leap beyond traditional laboratory-constrained studies.

Data Collection

The dataset introduced comprises video and audio recordings, eye tracking information, and self-reported demographic, political, and personality factors from participants engaged in natural, unscripted conversations. This dataset captures interactions from an egocentric viewpoint, providing a rich ground for analysis in contrast to third-person viewpoint datasets used in earlier works.

Methodology

The paper makes two primary contributions: a novel dataset and a fusion method for predicting engagement. The authors employ smart glasses to gather data on participants' head orientations, gaze directions, and facial expressions during conversations. This information is then processed using pre-trained models such as OpenFace for facial action unit recognition and MediaPipe for facial landmarks and gaze estimation.
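
As a rough illustration of this preprocessing step, the sketch below extracts per-frame facial landmarks with MediaPipe FaceMesh (with iris landmarks enabled, since they are commonly used for gaze estimation). The function name, the per-frame output format, and the restriction to a single face are illustrative assumptions rather than the authors' exact pipeline; OpenFace action-unit extraction typically runs as a separate command-line tool and is omitted here.

```python
# Minimal per-frame landmark extraction with MediaPipe FaceMesh.
# The exact downstream features (head pose, gaze angles) and their
# aggregation are assumptions for illustration, not the paper's pipeline.
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

def extract_landmarks(video_path: str):
    """Yield (frame_index, landmark list) for every frame with a detected face."""
    face_mesh = mp_face_mesh.FaceMesh(
        static_image_mode=False,
        max_num_faces=1,
        refine_landmarks=True,  # adds iris landmarks, useful for gaze estimation
    )
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            landmarks = results.multi_face_landmarks[0].landmark
            yield frame_idx, [(lm.x, lm.y, lm.z) for lm in landmarks]
        frame_idx += 1
    cap.release()
    face_mesh.close()
```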

The fusion approach involves utilizing an LLM as a reasoning engine. The model simulates each participant's perspective by generating responses to post-session engagement questionnaires based on multimodal transcripts that integrate dialogue, gaze, and facial expression data. This novel LLM fusion technique is compared against classical fusion methods such as k-nearest neighbors, support vector machines (SVM), random forests (RF), and neural network-based models.
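
The sketch below illustrates the general idea of this fusion strategy under stated assumptions: behavioral cues are serialized as bracketed annotations interleaved with the dialogue, and the resulting "multimodal transcript" is sent to a chat LLM (here via the OpenAI client, as a stand-in for whichever model the authors used) with a prompt asking it to answer the engagement questionnaire from the wearer's perspective. The annotation format, prompt wording, and helper names are hypothetical.

```python
# Sketch of the LLM-fusion idea: serialize dialogue plus behavioral
# annotations into one transcript and ask an LLM to answer the
# post-session engagement questionnaire from the wearer's perspective.
# Annotation format, prompt text, and use of the OpenAI client are
# illustrative assumptions, not the paper's exact implementation.
from openai import OpenAI

def build_multimodal_transcript(events):
    """events: dicts with 'time', 'speaker', 'text', and optional
    'gaze' / 'expression' annotations derived from the smart-glasses data."""
    lines = []
    for ev in events:
        cues = []
        if ev.get("gaze"):
            cues.append(f"gaze: {ev['gaze']}")
        if ev.get("expression"):
            cues.append(f"expression: {ev['expression']}")
        cue_str = f" [{'; '.join(cues)}]" if cues else ""
        lines.append(f"[{ev['time']:.1f}s] {ev['speaker']}: {ev['text']}{cue_str}")
    return "\n".join(lines)

def predict_engagement(transcript: str, model: str = "gpt-4o") -> str:
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompt = (
        "You are the wearer of smart glasses in the conversation below. "
        "Behavioral cues (gaze, facial expression) appear in brackets.\n\n"
        f"{transcript}\n\n"
        "Answer the post-session questionnaire: on a scale of 1-5, "
        "how engaged were you in this conversation? Reply with a single number."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

In practice, the returned answer would be parsed into a numeric score and compared against the participant's self-reported rating.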

Findings

Classical Fusion Performance

The paper finds that while traditional machine learning models like the SVM and RF achieve robust performance, the bidirectional long short-term memory (BiLSTM) networks and multi-layer perceptrons (MLPs) display varying efficacy. These classical methods, particularly the SVM and RF, often outperform the LLM-based approaches in predicting exact engagement scores.
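
For context, baselines of this kind can be reproduced in spirit with a few lines of scikit-learn, assuming engagement prediction is framed as regression over concatenated (early-fused) per-conversation summary features. The feature construction, placeholder data, and evaluation split below are illustrative assumptions, not the paper's protocol.

```python
# Sketch of classical fusion baselines (SVM, random forest) trained on
# concatenated per-conversation feature vectors. Placeholder data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X: one row per conversation side, concatenating summary statistics of
# gaze, facial-expression, and audio features; y: self-reported engagement.
rng = np.random.default_rng(0)
X = rng.normal(size=(68, 32))                   # placeholder fused features
y = rng.integers(1, 6, size=68).astype(float)   # placeholder 1-5 ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0)),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```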

LLM Fusion Techniques

Interestingly, the LLM fusion methods demonstrate comparable performance to classical models in predicting engagement levels. The ablation experiments reveal that incorporating facial expression and gaze data into the dialogue transcript enhances the LLM's ability to predict engagement accurately. Yet, baseline LLM models sometimes falter when relying solely on text-based inputs, underscoring the importance of multimodal data integration.
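
A modality ablation of this kind can be sketched by generating transcript variants that keep only subsets of the behavioral cues and querying the LLM on each. The sketch below reuses the hypothetical build_multimodal_transcript and predict_engagement helpers from the earlier sketch; the modality names and combinations are assumptions.

```python
# Sketch of a modality ablation: build transcript variants containing
# only subsets of the behavioral cues and compare the LLM's predictions.
from itertools import combinations

MODALITIES = ("gaze", "expression")  # hypothetical cue names

def strip_modalities(events, keep):
    """Return a copy of the event list retaining only the cues in `keep`."""
    return [
        {k: v for k, v in ev.items() if k in ("time", "speaker", "text") or k in keep}
        for ev in events
    ]

def run_ablation(events):
    results = {}
    for r in range(len(MODALITIES) + 1):
        for keep in combinations(MODALITIES, r):
            transcript = build_multimodal_transcript(strip_modalities(events, keep))
            results[keep or ("dialogue only",)] = predict_engagement(transcript)
    return results
```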

Valence and Arousal Prediction

When evaluating the LLM’s capacity to predict the valence (positive or negative attitudes) and arousal (intensity of emotional engagement), results highlight its reliability in identifying positive engagements. However, predicting neutral responses or the arousal intensity remains challenging, shedding light on the nuanced nature of engagement and the complexity of human conversational dynamics.

Implications and Future Directions

This paper's findings underscore the promising directions for socially aware technologies, augmented reality systems, and assistive communication tools for individuals with social or sensory impairments. The implementation of LLMs for multimodal data fusion presents a flexible framework that can be further refined with enhanced pre-trained models and larger, more diverse datasets.

Future research directions may involve expanding the dataset to include a broader demographic and investigating the integration of additional multimodal cues, such as physiological signals and contextual factors. Additionally, addressing the inherent biases in LLMs and pre-trained models remains crucial to ensuring the reliability and ethical application of these technologies in real-world scenarios.

In conclusion, the exploration of LLMs for engagement prediction in natural conversation captures a nuanced understanding of human interactions and opens new avenues for developing sophisticated, adaptive social technologies. This paper provides foundational insights and methodologies that can be expanded upon to build more socially attuned and responsive computational systems.
