IMAGINE: AI-Mediated Communication Model
- IMAGINE is a conceptual framework that models the closed-loop interaction between AI-driven content creation and real-time user response measurement.
- The model architecture employs three cooperating AI agents—creator, receptor, and negotiator—to dynamically adjust media based on user affect.
- Its formalization enables personalized, real-time optimization of media effects, while raising key methodological, practical, and ethical questions.
The Integrated Model of Artificial Intelligence-Mediated Communication Effects (IMAGINE) is a conceptual and formal framework designed to theorize, model, and empirically investigate the real-time, closed-loop interaction between AI-driven media content creation and AI-based measurement of user responses. Proposed by Frederic Guerrero-Solé, IMAGINE builds on the traditions of media evolution and media effects research, reconfiguring the paradigm of media influence to account for adaptive, algorithmically driven cycles in which content and user reactions are measured and dynamically optimized by cooperating artificial agents. The model generalizes beyond traditional one-to-many broadcast media, defining a new regime in which media content, reception, and optimization occur in a bidirectional, synchronous digital ecosystem (Guerrero-Sole, 2022).
1. Theoretical Foundations
IMAGINE synthesizes two key theoretical currents in communication research:
a) Media Evolution (Scolari, 2012):
Media technologies are conceptualized as evolving across five ecosystemic dimensions: form, content, use, social context, and normative context. Historical media revolutions (e.g., printing press, radio, smartphone) shifted one or more of these axes. IMAGINE posits a further evolutionary stage where media are instantiated as adaptive, AI-driven systems responsive to real-time user affect and cognition, thus altering both form and use.
b) Media Effects (Potter, 2010):
Media effects are traditionally approached as discrete changes in variables such as knowledge, attitudes, emotions, physiology, and behavior resulting from content exposure. Typically, these effects are measured ex post via surveys or physiological tracking. IMAGINE integrates Potter’s taxonomy directly within its operating loop: effects become both dependent variables and real-time control targets, embedding the feedback dynamics of Bandura’s social cognitive framework within the architecture itself (Guerrero-Sole, 2022).
2. IMAGINE Model Architecture
Central to IMAGINE is a closed-loop control system comprising three interacting AI agents:
| Agent | Function | Example Modalities |
|---|---|---|
| AA-creator (ac) | Generates/adapts media content (text, image, video, sound) | GANs, diffusion models, LLMs |
| AA-receptor (ar) | Measures real-time user response via sensors/analysis | EEG, fMRI, GSR, facial expression, EMG |
| AA-negotiator (an) | Holds goals, computes adaptation instructions for creator | Utility optimization, control-theoretic |
The real-time loop at step $t$ is defined by:
- $r_t$: the AA-receptor acquires a vector of user-state variables (e.g., valence, attention, arousal).
- $i_t$: the AA-negotiator compares $r_t$ to a goal vector $g$ and issues adaptation instructions.
- $c_{t+1}$: the AA-creator utilizes $i_t$ (and optional exogenous/contextual data) for content synthesis.
The primary objective is to minimize the distance between realized user states $r_t$ and target states $g$, formalized as:

$$\min_{\{c_t\}} \; \sum_t \left\| r_t - g \right\|$$

subject to constraints on feasible $c_t$ (Guerrero-Sole, 2022).
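The loop above can be sketched as a minimal control cycle. Everything below is an illustrative toy, not the published model: the gain-scaled error instruction, the linear user model, and the noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def receptor(user_state):
    """AA-receptor: return a (noisy) measurement r_t of the user state."""
    return user_state + rng.normal(0.0, 0.01, size=user_state.shape)

def negotiator(r_t, g, gain=0.5):
    """AA-negotiator: compare r_t with goal g and emit instruction i_t.
    A gain-scaled error signal is an illustrative choice."""
    return gain * (g - r_t)

def creator(c_t, i_t):
    """AA-creator: adapt the content vector c_t using instruction i_t."""
    return c_t + i_t

def user(c_t):
    """Toy user model (an assumption): affect responds linearly to content."""
    return 0.9 * c_t

g = np.array([0.8, 0.6])   # goal vector (e.g., valence, arousal)
c = np.zeros(2)            # initial content representation c_0
for t in range(50):
    r = receptor(user(c))  # r_t: measure user state
    i = negotiator(r, g)   # i_t: adaptation instruction
    c = creator(c, i)      # c_{t+1}: synthesize adapted content
```

Under these assumptions the measured state $r_t$ converges toward $g$ up to sensor noise, which is the closed-loop optimum the model describes.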
System "efficiency" is modeled as a triple $(e_c, e_r, e_n)$ over the three agents, with each agent's intelligence level ranging from 0 (none) to 1 (ideal AI). When $(e_c, e_r, e_n) = (1, 1, 1)$, IMAGINE achieves its closed-loop optimum. This abstraction allows a taxonomy of historical and current communication regimes, from classical broadcasting $(0,0,0)$ to data-driven platforms $(0,0,1)$, to pure experimental measurement $(0,1,0)$, and forward to AI-driven content without and with feedback ($(1,0,0)$; $(1,1,1)$).
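The named regimes can be tabulated directly; note that the (creator, receptor, negotiator) ordering of the triple is an assumption made for illustration.

```python
# Named regimes from the taxonomy; tuple order (creator, receptor,
# negotiator) is an illustrative assumption.
REGIMES = {
    (0, 0, 0): "classical broadcasting",
    (0, 0, 1): "data-driven platforms",
    (0, 1, 0): "pure experimental measurement",
    (1, 0, 0): "AI-driven content without feedback",
    (1, 1, 1): "AI-driven content with feedback (closed-loop optimum)",
}

def classify(e_creator, e_receptor, e_negotiator):
    """Return the named regime for an efficiency triple, else 'hybrid'."""
    return REGIMES.get((e_creator, e_receptor, e_negotiator), "hybrid regime")
```

Intermediate triples (e.g., a platform with strong measurement but weak generation) fall outside the named anchors and would read as hybrid regimes.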
3. Formalization and Optimization
IMAGINE’s dynamic can be expressed with:
- $c_t$: Vectorized content representation at time $t$.
- $r_t$: AA-receptor's measurement of cognitive/affective variables.
- $g$: Goal vector specifying desired user states.
- $i_t$: Instruction vector from the negotiator.
The maps between these vectors are implemented as neural network models or algorithmic mappings, learned from large datasets. The negotiator's optimization is goal-oriented and subject to real-time environmental feedback, distinguishing IMAGINE's adaptive quality from batch-processed or one-way approaches. Each media exposure is unique, highly granular, and dynamically modifiable (Guerrero-Sole, 2022).
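As a concrete sketch of goal-oriented optimization under environmental feedback: a negotiator that can only observe measured responses may estimate the loss gradient by finite differences. The toy user map (`tanh`), step size, and perturbation scale below are all assumptions, not part of the source formalization.

```python
import numpy as np

def user_response(c):
    """Stand-in for the unknown user map r = f(c) (an assumption)."""
    return np.tanh(c)

def loss(c, g):
    """Squared distance between measured response and goal vector g."""
    return float(np.sum((user_response(c) - g) ** 2))

def negotiator_step(c, g, eta=0.3, eps=1e-4):
    """One goal-oriented update: estimate the gradient of the loss from
    measured responses via finite differences, then step against it."""
    grad = np.zeros_like(c)
    base = loss(c, g)
    for k in range(c.size):
        dc = np.zeros_like(c)
        dc[k] = eps
        grad[k] = (loss(c + dc, g) - base) / eps
    return c - eta * grad

g = np.array([0.5, -0.2])   # desired user-state vector
c = np.zeros(2)             # initial content representation
for _ in range(200):
    c = negotiator_step(c, g)
```

After the loop, the simulated user response `np.tanh(c)` sits close to `g`, illustrating how adaptation can proceed from feedback alone, without an explicit model of the user.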
4. Illustrative Instantiations
Two canonical examples illustrate IMAGINE in operation:
a) Parasocial Interaction (PSI):
Goals are set to maximize parasocial bond metrics (trust, intimacy). The AA-creator uses avatars able to modulate expression, gesture, and prosody. The AA-receptor employs multimodal sensing (facial analysis, EEG, heart rate variability) to track user engagement curves. The negotiator tweaks micro-expressions and dialog to drive engagement toward the goal vector $g$, enabling a "friend in the box" dynamically optimized for closeness.
b) Real-Time Beautification:
The target is to enhance user-perceived attractiveness/arousal while avoiding the uncanny valley. A GAN or diffusion model subtly alters the user's live video stream (e.g., smoothing, lighting, symmetry). The receptor system tracks facial EMG (e.g., zygomaticus major), eye tracking, and skin response to quantify genuine pleasure. The negotiator continuously adjusts beautification levels to optimize affect—too little yields weak effect, too much triggers unease (Guerrero-Sole, 2022).
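The "too little/too much" trade-off amounts to maximizing an inverted-U response curve. A minimal sketch, assuming a Gaussian-shaped affect curve and a hill-climbing negotiator (both illustrative choices, not taken from the source):

```python
import numpy as np

def measured_affect(level, peak=0.4, width=0.2):
    """Toy inverted-U affect curve (an assumption): pleasure rises with
    beautification level, then falls past an uncanny-valley threshold."""
    return float(np.exp(-((level - peak) / width) ** 2))

def tune_beautification(level=0.0, step=0.05, iters=100):
    """Negotiator sketch: nudge the beautification level in whichever
    direction the measured affect increases."""
    for _ in range(iters):
        if measured_affect(level + step) >= measured_affect(level - step):
            level += step
        else:
            level -= step
    return level
```

Under these assumptions the tuned level settles near the peak of the affect curve (here 0.4), oscillating within one step size; a real receptor would supply `measured_affect` from EMG, eye tracking, and skin-response signals.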
5. Empirical Integration: Signal Diagnosticity and Perception
Research on AI mediation in communication informs the IMAGINE model's "Receiver Perception" stage. Khadpe et al. (2025) demonstrate that when messages are labeled as AI-assisted, their capacity to convey warmth or coldness about the sender is reliably dampened. The probability that a recipient assigns a particular character trait to the sender, formally expressed as

$$P(T = t), \quad t \in \{w, c\}$$

(where $w$ = "warm," $c$ = "cold"), shifts toward indeterminacy ($0.5$) under AI labels, regardless of message valence (Khadpe et al., 2025). This reduction in diagnosticity impacts relational trust, compliance, and perceived competence, acting as a form of signal attenuation rather than categorical disqualification. These findings provide a mechanistic basis for how AI mediation modifies not only content production and optimization, but also the fundamental social inferences within closed-loop communication architectures.
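The attenuation effect can be sketched as shrinking trait-attribution probabilities toward indeterminacy. The multiplicative shrink factor `k` below is an illustrative assumption, not an estimate from the study:

```python
def attenuate(p_trait, k=0.5):
    """Shrink a trait-attribution probability toward 0.5 when the message
    carries an AI-assistance label; k in [0, 1) sets the attenuation
    strength (an illustrative assumption)."""
    return 0.5 + k * (p_trait - 0.5)

# A message that reads as warm (p = 0.9) becomes less diagnostic of the
# sender once labeled AI-assisted, but is not pushed past indeterminacy.
print(round(attenuate(0.9), 3))   # 0.7
print(round(attenuate(0.2), 3))   # 0.35
```

Note the sign of the inference is preserved while its strength shrinks, matching the "signal attenuation rather than categorical disqualification" reading.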
6. Methodological, Practical, and Ethical Implications
IMAGINE’s adaptivity necessitates methodological shifts in media research:
- Real-time, high-frequency physiological and behavioral data supersede traditional self-report and pre/post designs.
- Analytical focus moves from static mean comparisons to dynamical-system and control-theoretic models.
- Media industries require scalable, streaming-oriented AI infrastructure to support ephemeral, highly individualized "just-for-you" content (Guerrero-Sole, 2022).
Application domains include persuasion/marketing (hyper-personalized advertising), health/therapy (on-the-fly VR phobia attenuation or mood elevation), and education (tutoring systems that adapt to detected confusion states).
Ethical concerns are pronounced: the ability to steer user affect and behavior in real time raises issues of transparency, agency, manipulation, and privacy. Who sets the goal vector $g$? Under what oversight? The closed-loop structure, if improperly constrained, becomes a potent tool for hidden influence and involuntary experimentation.
7. Limitations and Directions for Future Research
Current limitations of IMAGINE include:
- Scientific debate over the validity of AI-driven emotion recognition, especially with context-sensitive displays.
- Non-invasive neural decoding (e.g., fMRI, EEG) remains slow, noisy, and limited outside the lab.
- The model's control-theoretic loop presumes smooth feedback; human affect can be non-stationary, subject to habituation, or adversarial.
- Expansion to multi-user and adversarial scenarios, as well as normative frameworks for control and auditing of goal-setting AIs, remains open (Guerrero-Sole, 2022).
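The non-stationarity caveat can be illustrated in simulation: with a toy user whose response decays under cumulative exposure (an assumption), a fixed-gain negotiator keeps escalating content intensity while the measured response still drifts far below the goal.

```python
import numpy as np

def habituating_user(c, exposure):
    """Toy user (an assumption): the response to content intensity c
    decays exponentially with cumulative exposure."""
    return c * np.exp(-0.05 * exposure)

g = 0.8                      # goal affect level
c = 0.0                      # content intensity
c_hist, r_hist = [], []
for t in range(100):
    r = habituating_user(c, exposure=t)   # measured response r_t
    c += 0.5 * (g - r)                    # fixed-gain negotiator update
    c_hist.append(c)
    r_hist.append(r)

# In this toy run, content intensity c keeps rising while the measured
# response r never reaches the goal and eventually collapses:
# habituation outpaces the fixed-gain adaptation.
```

This is the failure mode the limitation points at: a controller that presumes a stationary user map will chase a moving target, motivating adaptive or habituation-aware negotiators.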
A plausible implication is that robust deployment of IMAGINE systems will critically depend on advances in multi-modal sensing, ethical governance, and a richer understanding of the complex and context-dependent nature of human media response.