A Cross-Domain Approach for Continuous Impression Recognition from Dyadic Audio-Visual-Physio Signals (2203.13932v1)

Published 25 Mar 2022 in cs.MM, cs.CV, cs.SD, and eess.AS

Abstract: The impression we make on others depends not only on what we say, but also, to a large extent, on how we say it. As a sub-branch of affective computing and social signal processing, impression recognition has proven critical in both human-human conversations and spoken dialogue systems. However, most research has studied impressions only from the signals expressed by the emitter, ignoring the response from the receiver. In this paper, we perform impression recognition using a proposed cross-domain architecture on the dyadic IMPRESSION dataset. This improved architecture makes use of cross-domain attention and regularization. The cross-domain attention consists of intra- and inter-attention mechanisms, which capture intra- and inter-domain relatedness, respectively. The cross-domain regularization includes knowledge distillation and similarity enhancement losses, which strengthen the feature connections between the emitter and receiver. The experimental evaluation verified the effectiveness of our approach, which achieved a concordance correlation coefficient of 0.770 in the competence dimension and 0.748 in the warmth dimension.
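
The cross-domain attention described in the abstract can be pictured concretely. Below is a minimal PyTorch sketch, assuming intra-attention means self-attention within each participant's feature stream and inter-attention means cross-attention in which each stream queries the other; the module names, dimensions, and sequential wiring are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class CrossDomainAttention(nn.Module):
    # Hypothetical sketch: intra-attention within each domain, then
    # inter-attention across domains (emitter <-> receiver). Not the
    # authors' code; all hyperparameters are placeholders.
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.intra_emit = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.intra_recv = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter_emit = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter_recv = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, emit, recv):
        # Intra-domain relatedness: each stream attends to itself.
        e, _ = self.intra_emit(emit, emit, emit)
        r, _ = self.intra_recv(recv, recv, recv)
        # Inter-domain relatedness: each stream queries the other.
        e_out, _ = self.inter_emit(e, r, r)
        r_out, _ = self.inter_recv(r, e, e)
        return e_out, r_out

# Toy usage: 2 dyads, 50 time steps, 128-dim fused audio-visual-physio features.
emit, recv = torch.randn(2, 50, 128), torch.randn(2, 50, 128)
e_out, r_out = CrossDomainAttention()(emit, recv)
print(e_out.shape, r_out.shape)  # both torch.Size([2, 50, 128])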
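
The reported metric, the concordance correlation coefficient (CCC), is a standard quantity; the function below is a self-contained NumPy implementation of the textbook definition, with synthetic data standing in for real predictions.

import numpy as np

def ccc(y_true, y_pred):
    # Concordance correlation coefficient:
    # 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2.0 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

# Synthetic check: noisy copies of the target score give CCC close to 1.
rng = np.random.default_rng(0)
truth = rng.standard_normal(1000)
pred = truth + 0.3 * rng.standard_normal(1000)
print(round(ccc(truth, pred), 3))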

Citations (1)