3D Emotional Artifacts
- 3D emotional artifacts are persistent, spatially-structured objects that map measurable and subjective emotions onto 3D forms using automated and interactive design methodologies.
- They employ computational pipelines integrating sensor data, neural models, and parametric mappings to create dynamic, user-responsive visualizations and installations.
- Applications span intelligent avatars, personalized biosignal sculptures, and immersive mixed-reality systems that facilitate collective emotional reflection and art therapy.
Three-dimensional (3D) emotional artifacts are persistent, spatially-structured objects—physical or virtual—that encode, reflect, or communicate affective states, traits, or histories. These artifacts emerge at the intersection of affective computing, data physicalization, embodied interaction, and expressive AI, and can materialize emotions via parametric geometric mappings, multimodal data transformations, and interactive or generative processes. Contemporary 3D emotional artifacts span intelligent facial avatar animation, personalized sculptures of physiological data, mixed-reality art therapy systems, AI-assisted affective design, and real-time installations that mirror collective emotion. This article surveys the modeling methods, system architectures, geometric mappings, and evaluation protocols that structure current research in this advanced domain.
1. Conceptual Definitions and Application Domains
3D emotional artifacts encode affective information in persistent spatial form, leveraging 3D geometry, color, dynamic structure, and materiality. Their key defining property is the mapping of subjective or measured emotional states—obtained from behavioral signals, physiological sensors, natural language, or context—into 3D forms for self-reflection, communication, or interaction.
Practical domains include:
- Emotionally expressive virtual avatars: Highly controllable 3D talking heads responding to speech, with explicit emotion modulation (Nocentini et al., 19 Mar 2024, Liang et al., 29 Apr 2024, Wang et al., 7 Oct 2024, Daněček et al., 2023).
- Personalized data sculptures: Tangible objects derived from biosignals (e.g., EEG, heart rate, breath), intended for self-discovery and well-being (Ortoleva et al., 16 May 2024, Nasri et al., 15 Dec 2025).
- Mixed reality and MR art therapy: Systems transforming real-time biosignals into virtual “emotional sculptures” for embodied emotional journaling (Nasri et al., 15 Dec 2025).
- Affective physicalization design tools: AI-driven platforms enabling users to map extracted emotion tokens to parametric 3D forms for fabrication or visualization (Wu et al., 26 Sep 2025).
- Immersive multi-user installations: Collective mood reflection through dynamic, data-driven 3D environments (Marhamati et al., 2023, Liu et al., 13 Feb 2025).
- Dream reliving and narrative archiving: Generative AI systems that encode dream sentiment and content in dynamic 3D point clouds, modulated by emotional axes (Liu et al., 13 Feb 2025).
2. Computational Pipelines and System Architectures
System architectures for 3D emotional artifacts commonly feature a staged signal-processing pipeline: input capture, affective representation, mapping to 3D parameters, generative/augmentation models, and rendering or fabrication.
Table: Characteristic Pipelines for 3D Emotional Artifact Systems
| System Type | Affect Sensing | Representation | 3D Mapping/Generation |
|---|---|---|---|
| Expressive avatars | Speech, text, emotion label | Latents, embeddings | Autoregressive or VAE models, rig params, NeRFs (Liang et al., 29 Apr 2024; Wang et al., 7 Oct 2024; Daněček et al., 2023) |
| Data sculpture (EEG) | EEG, heart rate | Relative band power | Height, thickness, curvature, color (Ortoleva et al., 16 May 2024) |
| MR art therapy | Respiration, HRV, eyes | Normalized biosignal | Color, pulsing, jitter, mesh deformation (Nasri et al., 15 Dec 2025) |
| AI-assisted physicalization | Language narrative | Extracted tokens + intensity (LLM) | User-mapped parameters to geometric attributes (Wu et al., 26 Sep 2025) |
| Dream reliving | Transcribed speech | LLM sentiment, valence/arousal | Text-to-3D diffusion models, point clouds (Liu et al., 13 Feb 2025) |
Most systems employ modular deep models (e.g., transformer encoders, autoencoders, CNNs, graph networks), with architectural choices tailored to the particular emotion→geometry mapping and application domain.
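A minimal sketch of such a staged pipeline is given below. The stage names (`encode_affect`, `map_to_geometry`, `run_pipeline`) and the specific mapping coefficients are illustrative assumptions and do not correspond to any of the cited systems.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AffectState:
    """Hypothetical affect representation: discrete label plus valence/arousal."""
    label: str       # e.g. "happy", "sad"
    valence: float   # in [-1, 1]
    arousal: float   # in [-1, 1]


@dataclass
class GeometryParams:
    height: float
    curvature: float
    color_rgb: tuple


def encode_affect(raw_signal: List[float]) -> AffectState:
    # Placeholder stage: a real system would use a pretrained speech or biosignal model.
    mean = sum(raw_signal) / max(len(raw_signal), 1)
    return AffectState(label="neutral", valence=max(-1.0, min(1.0, mean)), arousal=0.0)


def map_to_geometry(affect: AffectState) -> GeometryParams:
    # Deterministic, interpretable mapping: valence drives height and color, arousal drives curvature.
    return GeometryParams(
        height=1.0 + 0.5 * affect.valence,
        curvature=0.2 + 0.8 * abs(affect.arousal),
        color_rgb=(0.5 + 0.5 * affect.valence, 0.4, 0.5 - 0.5 * affect.valence),
    )


def run_pipeline(raw_signal: List[float]) -> GeometryParams:
    # Staged flow: input capture -> affect representation -> 3D parameter mapping.
    return map_to_geometry(encode_affect(raw_signal))


if __name__ == "__main__":
    print(run_pipeline([0.1, 0.3, 0.2]))
```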
3. Data Sources, Representation, and Emotional Modeling
Emotionally relevant data is ingested from a variety of sources, each demanding domain-specific preprocessing and representation strategies:
- Speech-driven artifacts: Input raw waveform, upsampled, windowed, and featurized via pretrained speech models (Wav2Vec2.0, HuBERT), then mapped to rig controls or 3D landmarks (Liang et al., 29 Apr 2024, Wang et al., 7 Oct 2024, Daněček et al., 2023).
- Physiological biometrics: Biosignal features (breath, HRV, EEG bands, eye movement velocity) are extracted, normalized, and temporally smoothed for stability (Nasri et al., 15 Dec 2025, Ortoleva et al., 16 May 2024); a preprocessing sketch follows this list.
- Natural language: User narratives are parsed by LLMs for discrete emotion tokens and numerical intensities (Wu et al., 26 Sep 2025); dream text is segmented into entities, sentiment, and social valence (Liu et al., 13 Feb 2025).
- Annotation: Emotion labels are typically discrete (neutral, angry, sad, happy, surprise, etc.) or, in advanced cases, continuous dimensions such as valence and arousal from Russell's circumplex model (Liu et al., 13 Feb 2025).
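The following sketch illustrates biosignal preprocessing of the kind described above. Min-max normalization and exponential moving-average smoothing are illustrative choices, not necessarily the exact techniques used in the cited systems.

```python
import numpy as np


def normalize_biosignal(x: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Clip and rescale a raw biosignal feature (e.g. breath rate, HRV) to [0, 1]."""
    return np.clip((x - lo) / (hi - lo + 1e-8), 0.0, 1.0)


def ema_smooth(x: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Exponential moving average for temporal stability of the mapped parameter."""
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = alpha * x[t] + (1.0 - alpha) * out[t - 1]
    return out


# Example: a noisy heart-rate stream smoothed before driving mesh deformation.
hr = np.array([72, 75, 90, 74, 73, 95, 76], dtype=float)
driver = ema_smooth(normalize_biosignal(hr, lo=50, hi=120))
print(driver.round(3))
```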
Emotion representations feed into downstream embedding layers, with parametric control (learnable lookup tables, embedding matrices, or neural feature modulation) facilitating fine-grained, run-time adjustment of emotional output parameters.
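A generic sketch of such parametric control is shown below: a learnable emotion lookup table produces FiLM-style scale/shift parameters that modulate decoder features. The module and its dimensions are assumptions for illustration; they do not reproduce any cited architecture.

```python
import torch
import torch.nn as nn


class EmotionConditionedLayer(nn.Module):
    """Learnable emotion lookup table producing FiLM-style scale/shift for decoder features."""

    def __init__(self, num_emotions: int, emb_dim: int, feat_dim: int):
        super().__init__()
        self.emotion_table = nn.Embedding(num_emotions, emb_dim)  # learnable lookup table
        self.to_scale = nn.Linear(emb_dim, feat_dim)
        self.to_shift = nn.Linear(emb_dim, feat_dim)

    def forward(self, features: torch.Tensor, emotion_id: torch.Tensor) -> torch.Tensor:
        e = self.emotion_table(emotion_id)       # (batch, emb_dim)
        scale = self.to_scale(e).unsqueeze(1)    # (batch, 1, feat_dim)
        shift = self.to_shift(e).unsqueeze(1)
        # Modulate per-frame decoder features with the emotion-dependent affine transform.
        return features * (1.0 + scale) + shift


# Example: modulate a sequence of speech-derived features with an emotion id.
layer = EmotionConditionedLayer(num_emotions=8, emb_dim=32, feat_dim=64)
feats = torch.randn(2, 50, 64)                   # (batch, frames, feat_dim)
out = layer(feats, torch.tensor([3, 3]))
print(out.shape)
```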
4. Mapping Emotional Representation to 3D Geometry and Animation
Core to the construction of 3D emotional artifacts is the explicit, often mathematically parameterized, mapping of emotional information into geometry, color, deformation, or kinematics. Examples include:
- Facial animation: Neural decoders regress 3D rig coefficients, dense mesh offsets, or landmark deformations as a function of speech and emotion embeddings. Control parameters are fused into each layer, enabling dynamic expressivity and user override (Liang et al., 29 Apr 2024, Nocentini et al., 19 Mar 2024, Wang et al., 7 Oct 2024, Daněček et al., 2023).
- Geometry mapping (biometric/art therapy/data sculpture): PSD-derived EEG band power or normalized biosignal features are mapped to morphometric and material parameters such as height, curvature, thickness, transparency, and color (Ortoleva et al., 16 May 2024); an illustrative mapping sketch appears at the end of this section.
- Token-to-geometry: User-selected or LLM-derived emotions are linked via user-tuned affine or nonlinear functions to object parameters such as “surfaceDistort,” “numberOfWaves,” or “globalFrequency” (Wu et al., 26 Sep 2025).
- Dream artifacts: Valence-arousal coordinates modulate point cloud color, particle dynamics, opacity, and size (Liu et al., 13 Feb 2025).
This mapping can be deterministic and interpretable (e.g., in data sculptures or personalized design tools), or learned by neural networks subject to explicit loss terms on expressivity, smoothness, or correlation with ground-truth labels.
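The sketch below illustrates deterministic mappings of the kind listed above: EEG band power to sculpture geometry, valence/arousal to point-cloud appearance, and an affine token-intensity mapping onto parametric attributes. The parameter names "surfaceDistort" and "numberOfWaves" come from the text; all coefficients and the specific affine forms are assumptions, not those of the cited systems.

```python
import colorsys


def eeg_to_sculpture_params(rel_band_power: dict) -> dict:
    """Illustrative affine mapping from relative EEG band power to sculpture geometry."""
    alpha = rel_band_power.get("alpha", 0.0)
    beta = rel_band_power.get("beta", 0.0)
    return {
        "height": 10.0 + 40.0 * alpha,     # higher relative alpha -> taller segment
        "curvature": 0.1 + 0.9 * beta,     # higher relative beta -> sharper curvature
        "color_rgb": colorsys.hsv_to_rgb(0.6 * (1.0 - beta), 0.8, 0.9),
    }


def valence_arousal_to_pointcloud(valence: float, arousal: float) -> dict:
    """Illustrative modulation of point-cloud appearance by valence/arousal coordinates."""
    return {
        "hue": 0.33 * (valence + 1.0) / 2.0,                 # negative -> red, positive -> green
        "particle_speed": 0.2 + 0.8 * (arousal + 1.0) / 2.0,  # higher arousal -> faster motion
        "opacity": 0.4 + 0.6 * (valence + 1.0) / 2.0,
    }


def token_to_parameters(intensity: float) -> dict:
    """Affine mapping of an LLM-extracted emotion-token intensity onto shape attributes."""
    return {"surfaceDistort": 0.5 * intensity, "numberOfWaves": int(2 + 10 * intensity)}


print(eeg_to_sculpture_params({"alpha": 0.4, "beta": 0.2}))
print(valence_arousal_to_pointcloud(valence=0.5, arousal=-0.3))
print(token_to_parameters(0.7))
```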
5. Evaluation Metrics and User Studies
Evaluation of 3D emotional artifacts employs both technical metrics and situated user studies.
- Technical/quantitative: Metrics focus on geometric accuracy (Lip Vertex Error, Emotional Vertex Error, Max Vertex Error, Landmark L2), perceptual fidelity, and, for rendered videos, SSIM, PSNR, and FID (Liang et al., 29 Apr 2024, Wang et al., 7 Oct 2024, Nocentini et al., 19 Mar 2024); a sketch of the vertex-error metrics appears after this list.
- Usability and affective engagement: Scales include the Multidimensional Assessment of Interoceptive Awareness (MAIA), Levels of Emotional Awareness Scale (LEAS), System Usability Scale (SUS), NASA-TLX workload, and user preference Likert ratings (Nasri et al., 15 Dec 2025, Wu et al., 26 Sep 2025).
- User reflections and self-discovery: Thematic analysis of qualitative interviews assesses dimensions such as emotional engagement, memory recall, reflection, and behavioral intention. Notably, 3D artifacts elicit stronger self-discovery and more embodied responses than 2D analogues (Ortoleva et al., 16 May 2024, Nasri et al., 15 Dec 2025).
- Sensitivity to real-time feedback: Systems impose safety thresholds, signal smoothing, or progressive disclosure strategies to prevent overwhelming users, particularly in therapeutic contexts (Nasri et al., 15 Dec 2025).
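A minimal sketch of the geometric metrics follows. It assumes the common convention that Lip Vertex Error is the maximal per-frame L2 deviation over a predefined lip-vertex set, averaged over frames; the cited papers may define the metric differently.

```python
import numpy as np


def lip_vertex_error(pred: np.ndarray, gt: np.ndarray, lip_idx: np.ndarray) -> float:
    """Assumed definition: per frame, take the maximal L2 deviation over lip vertices,
    then average over frames. pred and gt have shape (frames, vertices, 3)."""
    diff = np.linalg.norm(pred[:, lip_idx] - gt[:, lip_idx], axis=-1)  # (frames, |lip|)
    return float(diff.max(axis=1).mean())


def landmark_l2(pred_lmk: np.ndarray, gt_lmk: np.ndarray) -> float:
    """Mean L2 distance over all landmarks and frames; shapes (frames, landmarks, 2 or 3)."""
    return float(np.linalg.norm(pred_lmk - gt_lmk, axis=-1).mean())


# Example with random meshes (2 frames, 100 vertices) and 10 hypothetical lip vertices.
rng = np.random.default_rng(0)
pred = rng.normal(size=(2, 100, 3))
gt = rng.normal(size=(2, 100, 3))
print(lip_vertex_error(pred, gt, lip_idx=np.arange(10)))
```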
6. Limitations, Open Questions, and Future Directions
Identified limitations and future research foci include:
- Dataset generality: Many studies use single-actor or small-N data, limiting generalization across identity, gender, or cultural affective styles (Liang et al., 29 Apr 2024, Daněček et al., 2023).
- Expressivity resolution: Coarse or categorical emotion models may not capture the spectrum of human affect. Fine-grained (dimensional) or continuous labeling remains underexplored.
- Temporal coherence: Many pipelines apply framewise or local smoothness, but do not model long-range affective or behavioral dynamics (Liang et al., 29 Apr 2024, Daněček et al., 2023).
- Interpretability and metaphor: Subject-applied mappings (especially in physicalization) highlight tensions between automation and meaning, metrics and metaphor (Wu et al., 26 Sep 2025).
- Material constraints: Physical fabrications are often limited by static plastics, omitting haptics, interactive feedback, or group/social contexts (Ortoleva et al., 16 May 2024).
- Therapeutic translation: Open challenges include quantifying long-term impact of emotional artifact engagement, supporting trauma-informed workflows, and scaling up multi-modal and social affordances (Nasri et al., 15 Dec 2025, Liu et al., 13 Feb 2025).
Proposed directions include adversarial/perceptual training for realism, transformer-based sequence models for temporal context, and integration of haptic or multimodal feedback. There is active interest in continuous affect models, adaptive and socially shared emotional artifacts, and longitudinal studies of behavioral change.
7. Representative Systems: Comparative Overview
Below is a condensed structural summary of key systems drawn from recent literature.
| System | Signal Input | Emotional Encoding | 3D Mapping | Evaluation | Notable Features | Reference |
|---|---|---|---|---|---|---|
| CSTalk | Speech waveform | 5-class, embedding | 185 MetaHuman rigs | LVE/EVE | Correlation transformer | (Liang et al., 29 Apr 2024) |
| PhEmotion | Language narrative | Token+intensity (LLM) | Parametric 3D shape | User study | Manual/AI token mapping | (Wu et al., 26 Sep 2025) |
| Tangible Intangibles | Breath, HRV, eye move | Normalized biosignal | Unity, spatial params | MAIA/LEAS/SUS | Trauma-informed, MR art | (Nasri et al., 15 Dec 2025) |
| EmoGene | Audio+emotion label | 8-class, embedding | VAE→landmarks→NeRF | SSIM, MOS | 3-stage, FiLM modulation | (Wang et al., 7 Oct 2024) |
| Mood spRing | Speech/text | Pleasantness | Procedural 3D seasons | Gal. feedback | Fairness-aware fusion | (Marhamati et al., 2023) |
| DreamLLM-3D | Dream transcription | Valence/arousal, entity | Point-E clouds, Unity | User study | LLM+diffusion anim/soundscape | (Liu et al., 13 Feb 2025) |
| Data Sculpture | EEG, heart rate | Relative EEG powers | Height, curvature, etc | PANAS, qual. | Tangible, explorative | (Ortoleva et al., 16 May 2024) |
These systems collectively establish 3D emotional artifacts as a convergence point for affect modeling, procedural and parametric geometry, interactive art, and computational fabrication.