Computational Civic Storytelling
- Computational civic storytelling is an interdisciplinary field that fuses AI, machine learning, and HCI with narrative theory to enhance civic engagement.
- Systems employ real-time affective sensing, modular data pipelines, and human–AI co-creation to dynamically personalize and synthesize civic narratives.
- Applications span civic education, participatory decision-making, and cultural heritage, demonstrating measurable improvements in emotional engagement and trust.
Computational civic storytelling denotes the interdisciplinary field and suite of technologies applying AI, ML, and human–computer interaction (HCI) methodologies to generate, adapt, synthesize, or analyze narratives serving civic purposes. Such purposes include promoting civic education, facilitating participatory decision-making, mobilizing collective action, and representing diverse lived experiences within communities. This area draws on cognitive and social psychology (especially political psychology), narrative theory, affective computing, and algorithmic design, leveraging both generative and analytical AI systems to advance communicative functions fundamental to democratic societies (Wegemer et al., 30 Jun 2025, Overney et al., 23 Sep 2025, He et al., 31 Dec 2024, Poole-Dayan et al., 17 Nov 2025).
1. Theoretical Foundations: Civic Narratives and Computational Mediation
Computational civic storytelling operationalizes core concepts from political psychology, narratology, and participatory theory to inform the design and evaluation of narrative-centric interventions. Political psychology introduces constructs such as affective polarization—where identity-based emotions supersede substantive policy disagreements—and social identity theory, which predicts defensive reactions when civic narratives challenge in-group identities. Intergroup contact theory prescribes structured storytelling as a vehicle for reducing out-group animosity (Wegemer et al., 30 Jun 2025).
Narratology models narrative persuasion via three primary mechanisms: transportation (deep immersive engagement), character identification (empathic alignment with story actors), and interaction with a storyteller (parasocial engagement or dialogic scaffolding). Computational systems exploit these mechanisms by dynamically personalizing narrative elements—linguistic tone, demographic attributes, dialogic engagement—based on real-time user state (Wegemer et al., 30 Jun 2025). Civic storytelling is also situated at the intersection of official discourse and local, non-authorized narratives, acknowledging the importance of both structured public narratives (e.g., public leadership stories) and emergent, community-anchored accounts (Poole-Dayan et al., 17 Nov 2025, He et al., 31 Dec 2024).
2. System Architectures and Computational Pipelines
Architectures for computational civic storytelling are typically modular, integrating sensory input, narrative representation, and affect-adaptive or participatory generation. The AI-mediated Digital Civic Storytelling (AI-DCS) platform exemplifies this approach, orchestrating facial emotion recognition (TensorFlow CNN on RAF-DB at 10 fps), attention tracking (OpenCV Haar cascades), a beat-segmented narrative engine, and GPT-4-driven language adaptation. These components communicate via a data pipeline that aggregates affective and attentional signals, triggers segment-level adaptation, and manages bidirectional human–AI dialog through naturalistic audio/video channels (WebRTC) (Wegemer et al., 30 Jun 2025).
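The per-beat aggregation step of such a pipeline can be sketched in a few lines. This is a minimal, illustrative Python sketch (not the AI-DCS implementation): it assumes a per-frame emotion label (e.g., from a RAF-DB-trained classifier) and a boolean attention signal (e.g., from a Haar-cascade face tracker) arriving at ~10 fps, and rolls them up into a segment-level summary that downstream adaptation logic could consume. The `Frame` type, field names, and the 0.5 attention threshold are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Frame:
    emotion: str      # per-frame label, e.g. from a RAF-DB-trained CNN
    attending: bool   # e.g. face detected by a Haar-cascade tracker

def summarize_segment(frames, expected_emotion):
    """Aggregate ~10 fps frame-level signals into a per-beat summary:
    dominant emotion, fraction of frames attending, and whether the
    beat counts as emotionally aligned with the outline's expectation."""
    emotions = Counter(f.emotion for f in frames)
    dominant, _ = emotions.most_common(1)[0]
    attention = sum(f.attending for f in frames) / len(frames)
    aligned = dominant == expected_emotion and attention >= 0.5
    return {"dominant": dominant, "attention": attention, "aligned": aligned}

# One second of frames at 10 fps for a beat expected to evoke "sadness".
frames = [Frame("sadness", True)] * 7 + [Frame("neutral", False)] * 3
print(summarize_segment(frames, "sadness"))
```

A summary like this is what would trigger segment-level adaptation: a misaligned beat is flagged for rewriting rather than acted on frame by frame.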
StoryBuilder offers a complementary human–AI pipeline for aggregating and synthesizing large volumes of civic feedback: (1) text segmentation with LLMs (GPT-4o-mini), (2) assisted theme creation, (3) multi-pass consensus theme classification, (4) automated narrative synthesis from clustered quotes (Claude 3.5, with human-moderated constraints), and (5) multi-layered human review for citation accuracy and thematic coherence (Overney et al., 23 Sep 2025).
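Step (3) above, multi-pass consensus theme classification, can be illustrated with a simple majority-vote sketch. This is a hypothetical reconstruction, not StoryBuilder's code: `classify` stands in for one LLM classification pass, and quotes that fail to reach consensus are returned as `None` so a human reviewer can resolve them.

```python
from collections import Counter

def consensus_theme(quote, classify, passes=3, threshold=2):
    """Classify a quote into a theme by majority vote over several
    independent LLM passes; abstain (None) when no label reaches the
    consensus threshold, flagging the quote for human review."""
    votes = Counter(classify(quote) for _ in range(passes))
    label, count = votes.most_common(1)[0]
    return label if count >= threshold else None

# Mock classifier returning a fixed sequence of labels across passes.
answers = iter(["funding", "funding", "boundaries"])
print(consensus_theme("Schools in poorer areas ...", lambda q: next(answers)))  # → "funding"
```

The abstain-and-escalate pattern reflects the pipeline's reliance on multi-layered human review rather than fully automated labeling.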
Visual-narrative systems for cultural heritage employ iterative human–AI co-creation loops, using generative models such as Stable Diffusion. Participants provide domain-anchored prompts and curate outputs through cycles of human feedback and prompt refinement (He et al., 31 Dec 2024).
Example: Narrative Adaptation Pseudocode (Wegemer et al., 30 Jun 2025)
```
initialize n = 1
while n ≤ N:
    deliver = rewrite_with_GPT4(s_n_base, context, adapt_flag)
    TTS(deliver)
    record learner's E_n, A_n over this segment
    if aligned_n:
        n ← n + 1; adapt_flag = false
    else if retry_count < 3:
        adapt_flag = true; retry_count++
    else:
        perform_interactive_checkin()
        retry_count = 0
end
```
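The control flow of this pseudocode can be translated into runnable Python. In the sketch below, all collaborators are stubbed placeholders (the real system calls GPT-4 for `rewrite`, text-to-speech for `deliver`, and the affective/attentional sensing stack for `sense`); the defaults exist only so the loop runs standalone.

```python
def run_story(beats, max_retries=3,
              rewrite=lambda beat, adapt: beat,   # stand-in for the GPT-4 rewrite
              deliver=lambda text: None,          # stand-in for TTS playback
              sense=lambda beat: True):           # stand-in for affect/attention check
    """Walk the beat-segmented narrative, retrying misaligned beats
    with adaptation enabled before escalating to an interactive
    check-in. Returns a log of (beat_index, event) pairs."""
    n, retries, adapt, log = 0, 0, False, []
    while n < len(beats):
        deliver(rewrite(beats[n], adapt))
        if sense(beats[n]):                       # learner aligned with this beat
            log.append((n, "aligned")); n += 1; adapt, retries = False, 0
        elif retries < max_retries:
            log.append((n, "retry")); adapt = True; retries += 1
        else:
            log.append((n, "checkin")); retries = 0  # interactive check-in, then retry
    return log

print(run_story(["opening", "conflict", "resolution"]))
```

With the permissive default `sense`, every beat aligns on first delivery; swapping in a real sensing callback exercises the retry and check-in branches.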
3. Narrative Representation, Adaptation, and Analysis
Many systems are anchored by pre-structured story outlines in which each beat (segment) is annotated with an expected primary emotion and a base linguistic template. Adaptation layers leverage affective and attentional feedback to rewrite story segments in real time, using LLM prompts that optimize for emotional alignment and sustained attention (Wegemer et al., 30 Jun 2025). Personalization loss and engagement gain quantify adaptation efficacy across beats.
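The source names these beat-level metrics without defining them here. As one illustrative (hypothetical) operationalization, engagement gain could be measured as the mean per-beat change in attention share between base and adapted delivery:

```python
def engagement_gain(attention_base, attention_adapted):
    """Mean per-beat change in attention share after adaptation.
    Illustrative definition only; the source names the metric but
    does not specify this formula."""
    return sum(a - b for b, a in zip(attention_base, attention_adapted)) / len(attention_base)

# Hypothetical attention shares for three beats, before and after adaptation.
print(round(engagement_gain([0.55, 0.60, 0.70], [0.80, 0.85, 0.75]), 3))
```

A personalization loss could analogously penalize divergence between delivered and expected emotional trajectories, averaged over beats.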
Human–AI synthesis approaches (e.g., StoryBuilder) perform quote selection and theme-based clustering using LLM ensemble voting, constructing first-person composite narratives with explicit citations to source material. The process omits probabilistic mixture models, instead relying on LLM outputs and human curation for thematic fidelity and validity (Overney et al., 23 Sep 2025). For cultural heritage, narrative spectrum models formalize participant stance transitions between "realistic," "memorable," and "exploratory" narrative strategies (He et al., 31 Dec 2024).
For analytic use-cases, computational frameworks can automate qualitative annotation using LLMs guided by expert codebooks. Public narrative components—Story of Self, Us, and Now, with their Challenge–Choice–Outcome arcs and further content codes (Hope, Vulnerability, Urgency, etc.)—are detected at the sentence level, achieving macro-F1 ≈ 0.75–0.80 against expert annotators (Poole-Dayan et al., 17 Nov 2025).
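Macro-F1 in this setting is the equal-weight average of per-code F1 scores, so rare codes (e.g., Vulnerability) count as much as frequent ones. A minimal single-label sketch (real annotation may be multi-label per sentence; the code labels here are illustrative):

```python
def macro_f1(gold, pred, labels):
    """Macro-averaged F1 over narrative codes: per-label precision
    and recall from sentence-level annotations, averaged with equal
    weight across labels."""
    scores = []
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(g != lab and p == lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Four sentences labeled with public-narrative components.
gold = ["Self", "Us", "Now", "Us"]
pred = ["Self", "Us", "Us", "Us"]
print(macro_f1(gold, pred, ["Self", "Us", "Now"]))  # → 0.6
```

Note how the one missed "Now" sentence drags the macro average down sharply, which is exactly why inferential, low-frequency codes dominate the error analysis.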
4. Applications and Case Studies
Computational civic storytelling is deployed in several contexts:
- Civic Education and Polarization Reduction: AI-DCS was piloted in college classrooms, where adaptive narrative alignment increased emotional engagement (the share of emotionally aligned segments rose from 60% to 85%), improved perspective-taking (ΔPT = +0.45), and reduced polarization (thermometer shift +4.2°) (Wegemer et al., 30 Jun 2025).
- Community Engagement and Policy Deliberation: StoryBuilder synthesized 2,480 community quotes into 124 composite stories, which, when deployed via the StorySharer interface, resulted in increased respect and trust (assessed via ANOVA, including scene-based versus theme-based narrative comparisons) (Overney et al., 23 Sep 2025).
- Cultural Heritage Narratives: Co-creation workshops employing Stable Diffusion enabled participants to reconstruct and express local cultural heritage visually, surfacing a range of strategies and exposing both the amplifying power and the limitations (cultural bias, representational gaps) of generative AI (He et al., 31 Dec 2024).
- Leadership and Mobilization Analysis: LLM-automated annotation of public narratives enables large-scale, structure-resolved analysis of civic storytelling across both grassroots and political speech data (Poole-Dayan et al., 17 Nov 2025).
5. Strengths, Limitations, and Evaluation
Evaluation strategies span both qualitative and quantitative paradigms. Emotional engagement, perspective-taking, and attitudinal shifts are assessed via pre/post surveys and standardized indices (e.g., Interpersonal Reactivity Index), with statistical controls. Systems such as AI-DCS and StoryBuilder have demonstrated measurable improvements in engagement metrics but reveal persistent limitations:
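A typical quantitative evaluation of this kind pairs each participant's pre- and post-intervention index scores and reports the mean shift together with an effect size on the paired differences. The sketch below is illustrative only; the values are made up and do not reproduce any study's data.

```python
import statistics

def pre_post_effect(pre, post):
    """Paired pre/post comparison for a survey index (e.g., a
    perspective-taking subscale): mean shift plus a Cohen's d-style
    effect size computed on the paired differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    mean_shift = statistics.mean(diffs)
    d = mean_shift / statistics.stdev(diffs)
    return mean_shift, d

# Hypothetical per-participant index scores before and after the intervention.
pre  = [3.1, 2.8, 3.5, 3.0, 2.9]
post = [3.6, 3.2, 3.8, 3.5, 3.4]
shift, d = pre_post_effect(pre, post)
print(round(shift, 2))  # → 0.44
```

In practice the paired differences would also be run through a significance test with the statistical controls the studies describe; the sketch stops at descriptive effect measures to stay dependency-free.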
- Technical: Facial emotion tracking can be confounded by pose, lighting, and demographic bias; real-time dialogue management can suffer from LLM limitations (off-topic drift, surface-level adaptation) (Wegemer et al., 30 Jun 2025).
- Human–AI Synthesis: Automated story synthesis is vulnerable to citation hallucination, and user trust is contingent on explicit human involvement and transparent boundaries on AI input (Overney et al., 23 Sep 2025).
- Generative Bias: Visual storytelling models often default to over-represented, culturally Western motifs, requiring domain-specific dataset curation and prompt engineering to ensure representational equity (He et al., 31 Dec 2024).
- Annotation Subjectivity: LLM-based annotation underperforms on inferential content (e.g., Dream, Nightmare codes), and domain shifts (e.g., between coached public narratives and political speeches) can degrade accuracy (Poole-Dayan et al., 17 Nov 2025).
| System/Application | Key Metric/Result | Limitation/Challenge |
|---|---|---|
| AI-DCS (Education) | ΔPT=+0.45, 85% alignment | Real-time emotion tracking, LLM drift |
| StoryBuilder | +0.4–0.5 point increase in respect/trust | Citation hallucination, human review load |
| Heritage Co-creation | Narrative stance modeling, prompt iteration | Cultural bias, under-specified details |
| PN Annotation | Macro-F1 ≈ 0.75–0.80 | Subjective code undercall, domain shift |
6. Design Recommendations and Future Directions
Cross-system recommendations to advance authenticity, accessibility, and inclusivity in computational civic storytelling include:
- Detailed AI Error Explanations: Providing users with feedback on AI misinterpretations supports transparent collaboration and capability-aware mental models (He et al., 31 Dec 2024).
- Prompt Engineering Support: Templates and scaffolded refinement interfaces help users operationalize narrative specificity while minimizing representational drift (He et al., 31 Dec 2024).
- Human-in-the-Loop Oversight: RLHF strategies (reward defined on user alignment with factual, aesthetic, or cultural criteria) and participatory review frameworks mitigate bias and optimize generative outcomes (He et al., 31 Dec 2024, Overney et al., 23 Sep 2025).
- Scalability and Dashboarding: Algorithmic literacy can be fostered through adaptation trace logs and educator dashboards, increasing transparency and meta-awareness in educational contexts (Wegemer et al., 30 Jun 2025).
- Ecological Validation and Generalization: Future deployments must include participatory, diverse user studies, longitudinal tracking of civic attitudes/behaviors, and transfer of pipelines to additional domains such as municipal planning and public health (Overney et al., 23 Sep 2025).
Further lines of investigation should address multimodal affective sensing (integrating speech and physiological data), improvement of LLM-based annotation for underrepresented codes and contexts, and hybrid narrative-quantitative presentation formats.
7. Illustrative Examples
Two concrete outputs demonstrate system operation:
Adaptive Narrative Segment (AI-DCS, Political Polarization Education) (Wegemer et al., 30 Jun 2025):
“I’m a teacher who’s worked across our district. I’ve walked into some schools with brand-new labs, but other schools barely have enough textbooks. [...] I worry that rezoning might leave us understaffed or underfunded if we don’t account for these disparities. It’s crucial that boundary changes include plans to equalize facilities and support across all schools.”
Composite Community Story (StoryBuilder, School Rezoning) (Overney et al., 23 Sep 2025):
“I am a parent, and I have seen how school boundaries can create unfair situations for students [1]. At my kids’ school [...]. But schools in poorer areas can barely pay for the things they really need [2]. [...] This would help share resources and opportunities more fairly [4].”
These outputs foreground multiple real voices or adapt narrative language to foster trust, empathy, and perspective-taking—core goals of computational civic storytelling.
Computational civic storytelling encompasses an expanding suite of technologies and research methodologies that couple narrative theory, affective computing, AI-driven adaptation, and participatory synthesis. Its systems are evaluated by both behavioral science metrics and direct participant engagement, and require ongoing innovation in algorithmic transparency, cultural inclusivity, and human–AI oversight to realize the field’s civic and democratic potential (Wegemer et al., 30 Jun 2025, Overney et al., 23 Sep 2025, He et al., 31 Dec 2024, Poole-Dayan et al., 17 Nov 2025).