
EmoACT: ACT-based Emotion Synthesis

Updated 29 August 2025
  • The EmoACT Framework is a modular system that uses Affect Control Theory and EPA-space computations to generate adaptive emotional displays in artificial agents.
  • It separates sensory detection, emotion generation, and expressive actuation, allowing real-time mapping of user inputs to affective behaviors.
  • Empirical validation with a Pepper robot demonstrated that high-frequency, ACT-aligned displays significantly improve perceived social agency and transparency.

The EmoACT Framework refers to a set of methodologies for embedding, recognizing, and actuating emotion in artificial agents, with its foundational mechanisms derived from Affect Control Theory (ACT). The primary goal is to enhance social interaction, transparency, and naturalness in human–agent interaction by aligning an agent's affective displays and reasoning with human emotional processes. The following sections detail its theoretical background, architecture, operational principles, empirical validation, technical components, and future directions, drawn from the research record (Corrao et al., 16 Apr 2025).

1. Foundations: Affect Control Theory as an Engine for Emotion Synthesis

Affect Control Theory (ACT) provides the mathematical and conceptual substrate for EmoACT. Within ACT, all interactional elements—identities, events, and actions—are represented in a three-dimensional affective space known as EPA (Evaluation: good–bad, Potency: powerful–powerless, Activity: lively–quiet). Agents maintain a “fundamental” affective meaning, corresponding to their identity, and form “transient impressions” as interactions evolve.

Emotions emerge from the discrepancy between the agent's fundamental identity vector and the updated impression vector:

  • The agent computes the difference in EPA dimensions after observing or participating in interactions.
  • ACT’s formal equations provide quantitative methods to map this difference to a synthetic emotion—ensuring the mapping is continuous, interpretable, and socially contextualized.
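
Schematically, and with notation written here for exposition rather than quoted from the paper, the synthesis step maps the current impression and the fixed identity to an emotion vector:

\text{emotion} = \Psi(\text{impression}, \text{identity}), \qquad \text{impression}, \text{identity}, \text{emotion} \in \mathbb{R}^{3}

where Ψ denotes the ACT-derived mapping, driven primarily by the per-dimension discrepancy between impression and identity; the concrete equations used by EmoACT are given in Section 2.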

This approach affords dynamism: the agent's emotional display adapts to user input, interaction history, and environmental cues, rather than following a static, rule-based affective scheme.

2. Architecture: Modular Separation of Sensing, Reasoning, and Actuation

The EmoACT framework is designed as a modular system, separating the following computational steps:

  • Affect (Impression Detection): Sensory modules extract affective data from the environment (e.g., facial expressions, gaze, proximity, decision outcomes). These inputs are aggregated and mapped into the EPA impression vector.
  • Feeling (Emotion Generation): This module holds the agent's fixed identity vector. The current impression and the identity are fed into the ACT equations; the result is a synthetic emotion signature in EPA space (a code sketch follows this list). Example update formulas include:

\text{emotion}[E] = \text{impression}[E] - \text{identity}[E] + 1 + (\text{impression}[A] - \text{identity}[A]) \cdot \delta

\text{emotion}[P] = \text{impression}[P] - \text{identity}[P] - (\text{impression}[A] - \text{identity}[A])

\text{emotion}[A] = \text{impression}[A] + \text{identity}[A]

where δ is an activity-weighting parameter and "impression" and "identity" are EPA vectors. These calculations are performed in real time, adapting to the ongoing user interaction.

  • Emotional Behavior (Emotion Expression): The output EPA vector is mapped onto expressive behaviors such as robotic animations and LED color displays. Cosine similarity is used for discrete emotion categorization: if the similarity between the computed EPA vector and a predefined basic-emotion vector (e.g., Happiness, Anger) exceeds a given threshold (e.g., 60%), the system actuates the corresponding animation; otherwise it defaults to Neutral.
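
A minimal Python sketch of the generation and categorization steps, assuming plain three-element EPA arrays; the δ value, basic-emotion anchor vectors, and 0.6 threshold below are illustrative placeholders rather than the paper's exact settings:

```python
import numpy as np

DELTA = 0.5  # activity-weighting parameter delta; illustrative value, not from the paper

def generate_emotion(impression, identity, delta=DELTA):
    """Map an EPA impression and a fixed EPA identity to a synthetic emotion EPA vector,
    following the update formulas quoted above (component order: [E, P, A])."""
    imp, ident = np.asarray(impression, float), np.asarray(identity, float)
    e = imp[0] - ident[0] + 1.0 + (imp[2] - ident[2]) * delta
    p = imp[1] - ident[1] - (imp[2] - ident[2])
    a = imp[2] + ident[2]
    return np.array([e, p, a])

# Hypothetical EPA anchors for basic emotions; the paper's actual prototypes may differ.
BASIC_EMOTIONS = {
    "Happiness": np.array([ 2.5,  1.5,  1.0]),
    "Anger":     np.array([-2.0,  1.5,  1.5]),
    "Sadness":   np.array([-1.5, -1.5, -1.5]),
}

def categorize(emotion_epa, threshold=0.6):
    """Pick the basic emotion whose anchor is most cosine-similar to the computed emotion;
    fall back to Neutral when no similarity exceeds the threshold."""
    best_label, best_sim = "Neutral", threshold
    for label, anchor in BASIC_EMOTIONS.items():
        sim = float(np.dot(emotion_epa, anchor) /
                    (np.linalg.norm(emotion_epa) * np.linalg.norm(anchor) + 1e-9))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label

# Worked example with made-up numbers: a mildly positive identity meeting a positive impression.
identity   = [1.5, 1.0, 0.5]
impression = [2.0, 1.2, 1.0]
emotion = generate_emotion(impression, identity)   # -> array([1.75, -0.3, 1.5])
print(emotion, categorize(emotion))                # categorized as "Happiness" under these anchors
```

With these made-up numbers the computed emotion is closest to the Happiness anchor; on a real platform the anchors and threshold could be taken from ACT sentiment dictionaries or tuned empirically.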

This separation ensures that the framework is platform-agnostic—operable on diverse robots and virtual agents equipped with appropriate sensor and actuator systems.

3. Experimental Validation: HRI, Evaluation Metrics, and Key Findings

EmoACT was empirically evaluated on a Pepper humanoid robot in collaborative storytelling scenarios where participant decision-making modulates affective impressions:

  • Two experimental conditions were deployed:
    1. High-frequency emotion display: emotions updated and displayed at every narrative turn.
    2. Low-frequency emotion display: emotions displayed only at major story decision points.
  • Sensing: facial emotion analysis, gaze, and distance parameters extracted in real time.
  • Expressivity: whole-body animations and eye color changes mapped from computed EPA vectors.

Human perception was measured using the Godspeed Questionnaire (Anthropomorphism, Animacy, Likeability, Perceived Intelligence, Safety) and Agency Experience Questionnaire (emotional and cognitive agency ratings).

Key results:

  • Users rated the robot with high-frequency emotion display as more emotionally and cognitively agentic than the low-frequency or non-emotional baseline.
  • The ACT-derived emotional expressions were detectable and interpreted as genuine affect by participants.
  • Low-frequency displays led to more ambiguous perceptions, with expressions sometimes read as frenetic or ungrounded.

This experimental evidence demonstrates that regular, ACT-aligned emotional actuation directly augments perceived social agency and transparency in artificial agents.

4. Technical Implementation: Algorithms, State Updates, and Categorization Strategies

The core algorithmic loop of EmoACT involves:

  • Continuous impression update computation from sensor data.
  • Real-time application of EPA-based ACT equations as outlined in the formulas above.
  • Discrete categorization of EPA vectors against a set of basic emotions using cosine similarity.

A table summarizing the modular flow:

Module               | Input                | Output
---------------------|----------------------|-----------------
Impression Detection | Sensory cues         | EPA vector
Emotion Generation   | Impression, Identity | New EPA emotion
Emotion Expression   | EPA vector           | Animation/color

The modular server architecture is platform-independent, allowing the same core ACT logic to run across different agent hardware and morphologies.
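
As a rough sketch of how the three modules could be wired together, the following loop skeleton reuses generate_emotion and categorize from the Section 2 sketch; the class, callbacks, and timing are hypothetical placeholders rather than the paper's API:

```python
import time

class EmoACTLoop:
    """Minimal pipeline sketch: impression detection -> emotion generation -> emotion expression."""

    def __init__(self, identity_epa, sense_fn, actuate_fn, period_s=0.5):
        self.identity = identity_epa   # fixed fundamental EPA identity vector
        self.sense = sense_fn          # callable returning the current EPA impression
        self.actuate = actuate_fn      # callable mapping (label, EPA vector) to behavior
        self.period = period_s         # update period; a shorter period means more frequent displays

    def step(self):
        impression = self.sense()                                   # 1. impression detection
        emotion_epa = generate_emotion(impression, self.identity)   # 2. ACT-based emotion generation
        label = categorize(emotion_epa)                             # 3. discrete categorization
        self.actuate(label, emotion_epa)                            # 4. animation / LED actuation
        return label

    def run(self, turns=10):
        for _ in range(turns):
            self.step()
            time.sleep(self.period)
```

On a Pepper-class robot, sense_fn would wrap the facial-emotion, gaze, and distance pipeline, and actuate_fn would trigger the matching whole-body animation and eye-LED color; neither binding is specified here.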

5. Generalizability and Platform-Agnostic Adaptation

The framework’s modular approach allows adaptation to:

  • Agents with distinct identity profiles (positive, negative, stigmatized).
  • Robots of differing morphologies and virtual agents.
  • Extension to other affective actuation modalities (facial features, gestures, voice).

High portability across platforms stems from reliance on EPA space and ACT’s continuous mapping, facilitating integration into broader human–robot interaction systems.
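
For illustration only, distinct identity profiles could be expressed as alternative fundamental EPA vectors and passed to the same pipeline; the values below are hypothetical, not taken from the paper:

```python
# Hypothetical identity profiles as fundamental EPA vectors [E, P, A].
IDENTITY_PROFILES = {
    "positive":    [ 2.0,  1.0,  0.5],
    "negative":    [-1.5,  0.5,  0.5],
    "stigmatized": [-2.0, -1.5, -0.5],
}

# Same ACT logic, different fundamental identity; sensing/actuation stubs for illustration.
loop = EmoACTLoop(
    IDENTITY_PROFILES["stigmatized"],
    sense_fn=lambda: [1.0, 0.5, 1.0],                 # stub EPA impression
    actuate_fn=lambda label, epa: print(label, epa),  # stub expression backend
)
loop.run(turns=3)
```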

6. Limitations and Directions for Further Study

Current implementation scope:

  • Identity profile is positive/neutral; expansion to negative or stigmatized identities is suggested.
  • No memory/hysteresis: emotion updates are strictly Markovian, driven by instantaneous perception.
  • Expressive bandwidth is limited to animation and color.

Future work directions include:

  • Adding temporal persistence in affective state (mood continuity).
  • Expanding the ACT module to accommodate hysteresis or identity transitions.
  • Integrating additional models of personality, mood, and advanced multimodal sensors.
  • Broader in-the-wild evaluations to refine expressiveness and user perception metrics.

7. Significance and Impact for Enhanced Social Agency in Artificial Agents

By operationalizing ACT as the generative engine for agent emotion, EmoACT establishes a formal, interpretable, and platform-independent foundation for affective computing in artificial agents. Empirical studies confirm that the frequency and appropriateness of emotional displays have direct and measurable effects on perceived robot agency, animacy, and emotional intelligence. The platform-agnostic design ensures that the core principles and methods are extensible to any agent integrating EPA-based emotional actuation.

The framework thus advances robust, theory-backed emotion synthesis that facilitates naturalistic, transparent, and engaging human–agent interaction in real-world settings.
