I Know Your Feelings Before You Do: Predicting Future Affective Reactions in Human-Computer Dialogue (2303.00146v4)

Published 1 Mar 2023 in cs.HC, cs.RO, cs.SD, and eess.AS

Abstract: Current Spoken Dialogue Systems (SDSs) often serve as passive listeners that respond only after receiving user speech. To achieve human-like dialogue, we propose a novel future prediction architecture that allows an SDS to anticipate future affective reactions based on its current behaviors, before the user speaks. In this work, we investigate two scenarios: speech and laughter. For speech, we propose to predict the user's future emotion based on its temporal relationship with the system's current emotion and its causal relationship with the system's current Dialogue Act (DA). For laughter, we propose to predict the occurrence and type of the user's laughter using the system's laughter behaviors in the current turn. Preliminary analysis of human-robot dialogue demonstrated synchronicity in the emotions and laughter displayed by the human and robot, as well as DA-emotion causality in their dialogue. This verifies that our architecture can contribute to the development of an anticipatory SDS.

Authors (8)
  1. Yuanchao Li (24 papers)
  2. Koji Inoue (28 papers)
  3. Leimin Tian (12 papers)
  4. Changzeng Fu (6 papers)
  5. Carlos Ishi (2 papers)
  6. Hiroshi Ishiguro (19 papers)
  7. Tatsuya Kawahara (61 papers)
  8. Catherine Lai (24 papers)
Citations (3)