Explainable AI for the Arts

Updated 20 November 2025
  • XAIxArts is a field that combines AI transparency with creative practice, integrating technical, artistic, and participatory methods.
  • It employs multimodal explanation mediums such as visual, auditory, and tactile cues to render AI models legible and artistically engaging.
  • Methodologies include interactive latent space editing and embodied interactions, enabling co-creative processes and enriched artistic agency.

Explainable AI for the Arts (XAIxArts) denotes a rapidly developing domain at the intersection of artificial intelligence, human–computer interaction, and artistic practice, framed by the unique demands of creative processes. While traditional Explainable AI (XAI) has prioritized interpretability, transparency, and trust in critical societal domains through mechanistic and technocentric explanations, XAIxArts extends and redefines these objectives. Here, explainability is not limited to computational transparency but encompasses artistic, participatory, and embodied modes of making AI systems legible, accountable, and co-creative. XAIxArts thus invites artists, designers, technologists, and theorists into a shared epistemic arena where documentation, sense-making, glitch, and narrative function as both critique and invention (Bryan-Kinns et al., 28 Feb 2025).

1. Foundations and Key Paradigms

XAIxArts arises from the limitations of technocentric XAI paradigms. Mainstream XAI emphasizes statistical feature importances, saliency maps, and post-hoc rationales directed at specialized end-users (e.g., clinicians, auditors), which often fail to capture the situated, sensorial, and multidimensional nature of creative practice. Instead, XAIxArts reconceptualizes explainability, understood broadly as “everything that makes machine learning models transparent and understandable, also including information about the data, performance, etc.”, as an artistic as well as a technical enterprise (Bryan-Kinns et al., 28 Feb 2025).

Distinguishing principles include:

  • Artistic Mediation over Black-Box Transparency: Artistic practices render model internals tangible through embodied experience, performance, and creative translation, rather than technical reporting (Hemment et al., 2019, Hemment et al., 2023).
  • Sense-Making vs. Ornamentation: XAI in the arts questions the epistemic tradition of explanation as justification and offers “sense-making” as an alternative—an expansive process where explanation is distributed, negotiated, and participatory rather than evaluative or ornamental (Arora et al., 2023).
  • Agency and Legibility: Emphasis shifts to increasing artists' agency to intervene in AI models and legibility—users' comprehension of systems’ mechanics and values—rather than purely providing post-hoc explanations (Hemment et al., 2023).

2. Conceptual Frameworks and Theoretical Groundings

Multiple theoretical frameworks underpin XAIxArts:

  • Explanatory Pragmatism: Explainability is a context-sensitive process in which explanations are tailored to the specifics of audience and artistic context. This is formalized as a mapping $E: C \times M \times A \to \mathcal{E}$, where $C$ is context, $M$ is model state, $A$ is audience profile, and $\mathcal{E}$ is the set of candidate explanations. A utility $U(e; a)$ is optimized through feedback-driven refinement to maximize the alignment of explanations with audience needs (Privato et al., 2023); a minimal sketch follows this list.
  • Experiential AI: Here, explanation is operationalized through the chain Data → AI Algorithm → Artistic Mediation → Human Understanding, emphasizing sensory translation and co-creation (Hemment et al., 2019, Hemment et al., 2023).
  • Craft-Based Reflection-in-Action: Drawing on Schön, artists probe and “bend” models through iterative, hands-on manipulation, fostering tacit understanding rather than static comprehension (Abuzuraiq et al., 10 Aug 2025).
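
As a minimal illustration of the explanatory-pragmatism mapping above, the following Python sketch selects the candidate explanation with the highest estimated utility for a given audience profile and refines that estimate from feedback. All names, weights, and scoring heuristics are hypothetical illustrations, not an interface from Privato et al. (2023).

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    """One element of the candidate set E (e.g., a saliency overlay or a sonification)."""
    name: str
    modality: str               # "visual", "audio", "haptic", ...
    technical_depth: float      # 0.0 (lay-friendly) .. 1.0 (expert-level)
    learned_bonus: float = 0.0  # adjusted from audience feedback over time

@dataclass
class Audience:
    """Audience profile A: who is receiving the explanation."""
    ai_literacy: float          # 0.0 .. 1.0
    preferred_modality: str

def utility(e: Explanation, a: Audience) -> float:
    """Heuristic stand-in for U(e; a): reward modality match, penalize depth mismatch."""
    modality_match = 1.0 if e.modality == a.preferred_modality else 0.0
    depth_mismatch = abs(e.technical_depth - a.ai_literacy)
    return modality_match - depth_mismatch + e.learned_bonus

def select(candidates: list[Explanation], a: Audience) -> Explanation:
    """Choose the candidate that maximizes U(e; a)."""
    return max(candidates, key=lambda e: utility(e, a))

def feedback(e: Explanation, rating: float, lr: float = 0.2) -> None:
    """Feedback-driven refinement: clarity ratings in [0, 1] nudge the candidate's learned bonus."""
    e.learned_bonus += lr * (rating - 0.5)

# Example: a performer with high AI literacy who works primarily in sound.
audience = Audience(ai_literacy=0.8, preferred_modality="audio")
candidates = [
    Explanation("latent-space heatmap", "visual", 0.6),
    Explanation("sonified attention weights", "audio", 0.7),
]
chosen = select(candidates, audience)
feedback(chosen, rating=0.9)  # the performer found it clear; reinforce it
```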

These paradigms underscore a reorientation of explainability towards participatory, multimodal, and critical engagements, assembling a foundation distinct from technocentric XAI.

3. Methodologies, Architectures, and Modalities

3.1 Multimodal Explanation Mediums

XAIxArts expands explanation modalities beyond the textual, visual, and numerical to include auditory (sonification), tactile (haptic), and even olfactory/gustatory channels, enabling richer mapping of AI internal states to creative processes (Clemens, 2023). A three-dimensional design space guides XAIxArts designers:

  • Interaction Paradigm: Autonomous systems, creativity support tools (CSTs), co-creative systems.
  • User Characteristics: Gradient of AI literacy and domain expertise.
  • Primary Artistic Modality: Visual, audio, tactile/haptic, olfactory, gustatory.

Medium selection is task- and user-dependent; for instance, tactile cues may be appropriate for sculptural practice, while sonification benefits music composition and performance.
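
To make this selection logic concrete, the sketch below encodes the three design-space dimensions as inputs to a simple lookup in Python. Beyond the sculpture and music examples above, the specific mappings are illustrative assumptions, not prescriptions from the cited work.

```python
def choose_explanation_medium(interaction: str, ai_literacy: str, art_modality: str) -> str:
    """Heuristic medium selection over the three design-space dimensions.

    interaction  : "autonomous" | "cst" | "co-creative"
    ai_literacy  : "novice" | "expert"
    art_modality : "visual" | "audio" | "sculpture" | ...
    """
    # The primary artistic modality dominates: explain in the medium the artist already works in.
    by_modality = {
        "sculpture": "tactile/haptic cues",      # e.g., vibration encoding model confidence
        "audio": "sonification of model state",  # e.g., latent activity mapped to timbre
        "visual": "visual overlays",             # e.g., saliency or latent-space maps
    }
    medium = by_modality.get(art_modality, "visual overlays")

    # Co-creative systems aimed at novices benefit from an added plain-language layer.
    if interaction == "co-creative" and ai_literacy == "novice":
        medium += " plus plain-language narration"
    return medium

print(choose_explanation_medium("co-creative", "novice", "sculpture"))
# -> "tactile/haptic cues plus plain-language narration"
```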

3.2 System Architectures

Architectural typologies in XAIxArts include:

  • Interactive Latent Space Editing: VAEs and diffusion models in music, sound, and image generation are made explainable by surfacing latent variables with artist-controlled interfaces, regularization schemes, and visualization overlays (Bryan-Kinns et al., 2023, Tecks et al., 21 Jul 2024, Abuzuraiq et al., 10 Aug 2025); a minimal sketch of this pattern appears after this list.
  • Fuzzy Rule-Based Model Explanations: Deep models for art image classification are rendered interpretable by mapping visual traits to deep features through a fuzzy-rule classifier, enabling human-readable synopses of model reasoning and improvements in generalization (Fumanal-Idocin et al., 2023).
  • User-Centric AIGC Pipelines: In generative art tools, XAI techniques such as local surrogate models (e.g., LIME), SHAP, and gradient-based sensitivity analysis guide both prompt input and feedback stages, enabling actionable and contextual prompt refinement (Yu et al., 2023).
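
The sketch below illustrates the latent space editing pattern, assuming a generic PyTorch VAE decoder rather than the MeasureVAE or ComfyUI interfaces: an artist-facing 2D control pad writes directly into two regularized latent dimensions while the rest of the latent vector is held fixed, so each control has a legible, repeatable effect.

```python
import torch

def edit_and_decode(decoder: torch.nn.Module,
                    z: torch.Tensor,
                    pad_x: float,
                    pad_y: float,
                    dim_x: int = 0,
                    dim_y: int = 1) -> torch.Tensor:
    """Map a 2D control pad onto two regularized latent dimensions and decode.

    decoder : trained decoder (placeholder for any latent -> artifact module)
    z       : current latent vector, shape (1, latent_dim)
    pad_x/y : pad positions in [-1, 1], e.g., "note density" and "rhythmic complexity"
    dim_x/y : latent dimensions the pad axes were regularized to control
    """
    z_edit = z.clone()
    z_edit[0, dim_x] = pad_x * 3.0   # scale pad range to roughly +/- 3 std of the prior
    z_edit[0, dim_y] = pad_y * 3.0
    with torch.no_grad():
        return decoder(z_edit)       # e.g., a piano-roll tensor or an image

# Usage: sweep one pad axis while holding everything else fixed, so the artist
# can hear/see exactly what that latent dimension contributes.
# for x in (-1.0, 0.0, 1.0):
#     artifact = edit_and_decode(decoder, z, pad_x=x, pad_y=0.0)
```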

3.3 Embodiment and Agency

Embodied interaction—mapping bodily gestures or sensor data directly to model latent spaces—functions as both a transparency mechanism and an artistic language, exemplified in interactive neural audio synthesis and performance (Wilson et al., 18 Oct 2024). Artists and performers gain agency by being able to “feel” and steer the generative process through movement.
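
A rough illustration of this gesture-to-latent mapping, assuming a generic wearable sensor array and neural audio decoder rather than any specific system from the cited work:

```python
import numpy as np

def sensors_to_latent(sensor_values: np.ndarray,
                      latent_dim: int,
                      sensor_ranges: np.ndarray) -> np.ndarray:
    """Map raw sensor readings (e.g., stretch, bend, acceleration) onto a latent vector.

    sensor_values : shape (n_sensors,), raw readings from wearable sensors
    sensor_ranges : shape (n_sensors, 2), calibrated (min, max) per sensor
    """
    lo, hi = sensor_ranges[:, 0], sensor_ranges[:, 1]
    normalized = (sensor_values - lo) / (hi - lo + 1e-9)   # -> [0, 1]
    centered = (normalized - 0.5) * 4.0                    # -> roughly [-2, 2], matching a unit-Gaussian prior
    z = np.zeros(latent_dim)
    n = min(len(centered), latent_dim)
    z[:n] = centered[:n]                                   # each sensor drives one latent dimension
    return z

# In a performance loop, each new sensor frame is mapped and decoded:
# z = sensors_to_latent(read_sensors(), latent_dim=16, sensor_ranges=calibration)
# audio_frame = decoder(z)   # placeholder for a neural audio synthesis model
```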

4. Case Studies and Practice-Based Investigations

The XAIxArts community has produced a rich ecosystem of practice-based investigations. Exemplary projects include:

| Project/Method | Medium/Domain | Explainability Mechanism/Focus |
| --- | --- | --- |
| “Embodied Exploration of Latent Spaces” (Wilson et al., 18 Oct 2024, Bryan-Kinns et al., 28 Feb 2025) | Dance, neural audio | E-textile gestures probe VAE latent space |
| ARTxAI (Fumanal-Idocin et al., 2023) | Artistic image classification | Fuzzy rule system explains visual features |
| ComfyUI Model Bending (Abuzuraiq et al., 10 Aug 2025) | Diffusion for visual art | Node-based plugins, real-time layer manipulation |
| MeasureVAE UI (Bryan-Kinns et al., 2023) | Generative music | Regularized latent variables surfaced in 2D pads |
| User-centric AIGC (Yu et al., 2023) | Text-to-image generation | SHAP-weighted keyword palettes, local surrogates |
| Experiential AI (Hemment et al., 2023, Hemment et al., 2019) | Interactive installations | Latent space sliders, visual and narrative mediation |

Further, reflective works such as “Looking Back, Moving Forward” (Lewis, 9 Aug 2024) and “AIxArtist” (Lewis, 2023) detail first-person encounters, highlighting core challenges: transparency of attribution, ethics of prompt usage, and distinguishing inspiration from plagiarism.

5. Evaluation, Metrics, and Open Challenges

Quantitative and qualitative assessment of explainability in the arts presents significant open challenges:

  • Interpretability Metrics: For regularized latent variable models, dimension-attribute disentanglement scores (e.g., correlation between a latent variable and a musical or visual feature) are used (Bryan-Kinns et al., 2023, Tecks et al., 21 Jul 2024).
  • Fuzzy Rule Quality: Metrics such as the dominance score ($ds_r = s_r \cdot c_r$), support $s_r$, and confidence $c_r$ are deployed for rule-based explanations (Fumanal-Idocin et al., 2023); a small sketch of these metrics, alongside the latent-attribute correlation above, follows this list.
  • Legibility and Agency: Mutual information between artist’s mental model and system behavior, and the artist's effective control space, are proposed but not yet standardized (Hemment et al., 2023).
  • Context and Audience Adaptation: Explanations are evaluated via feedback loops optimizing for recipient clarity, relevance, and practical depth (Privato et al., 2023).
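
A small numerical sketch of the first two families of metrics, using assumed definitions of rule support and confidence rather than the exact formulations in the cited papers: the absolute correlation between one latent dimension and one attribute as a disentanglement score, and the dominance score $ds_r = s_r \cdot c_r$ for a rule.

```python
import numpy as np

def disentanglement_score(latent_dim_values: np.ndarray, attribute_values: np.ndarray) -> float:
    """Absolute Pearson correlation between one latent dimension and one
    musical/visual attribute (e.g., note density) across a set of generations."""
    return float(abs(np.corrcoef(latent_dim_values, attribute_values)[0, 1]))

def dominance_score(rule_fires: np.ndarray, rule_correct: np.ndarray) -> float:
    """Dominance ds_r = s_r * c_r for one rule, with assumed crisp definitions:
    support s_r = fraction of samples the rule covers,
    confidence c_r = accuracy among the covered samples.

    rule_fires   : boolean mask, samples where the rule's antecedent holds
    rule_correct : boolean mask, samples where the rule's prediction is correct
    """
    support = rule_fires.mean()
    confidence = (rule_fires & rule_correct).sum() / max(rule_fires.sum(), 1)
    return float(support * confidence)
```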

Notably, XAIxArts emphasizes experiential engagement, generativity, and critical reflection as alternative or complementary axes to fidelity or computational metrics (Bryan-Kinns et al., 28 Feb 2025). There remains little formal user-study validation; the field calls for mixed-method and participatory evaluation frameworks, as well as the development of standardized interpretability protocols for embodied and multimodal creative workflows.

6. Ethics, Attribution, and Sociotechnical Considerations

XAIxArts is critically engaged with questions of authorship, transparency, and ethics:

  • Attribution Transparency: Standardized attribution logs and metadata (listing AI model, version, and prompts) are advocated for artworks embedding AI processes (Lewis, 9 Aug 2024, Lewis, 2023); a possible record format is sketched after this list.
  • Ethics of Asking and Plagiarism: Community frameworks distinguish legitimate inspiration from unacknowledged copying and encourage protocols for ethical prompt design (Lewis, 9 Aug 2024).
  • Empowerment, Inclusion, and Fairness: XAIxArts foregrounds equitable access and the mitigation of bias through co-design, open tools, and community engagement, especially with marginalized or underrepresented groups (Bryan-Kinns et al., 28 Feb 2025).
  • Openness and Hacking: The embrace of hacking, glitch, and failure as both critique and generative resource destabilizes notions of AI perfection and supports pluralistic, open experimentation (Bryan-Kinns et al., 28 Feb 2025).
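
One possible shape for such an attribution record, sketched as a Python dictionary serialized to JSON; the field names are illustrative rather than a published standard.

```python
import json
from datetime import date

# Hypothetical attribution log for an artwork that embeds AI-generated material.
attribution_record = {
    "artwork_title": "Untitled Study No. 3",
    "human_authors": ["A. Artist"],
    "ai_components": [
        {
            "model_name": "example-image-model",  # placeholder, not a reference to a real model
            "model_version": "1.2",
            "provider": "self-hosted",
            "prompts": ["charcoal study of a dancer, long exposure"],
            "role": "initial composition draft, later reworked by hand",
        }
    ],
    "date_created": date.today().isoformat(),
    "disclosure_statement": "AI output was used as a starting point and substantially modified.",
}

print(json.dumps(attribution_record, indent=2))
```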

These sociotechnical commitments are articulated in the XAIxArts Manifesto, which calls for living guidelines, collaborative reflection, and the democratization of explainable, modifiable AI infrastructure (Bryan-Kinns et al., 28 Feb 2025).

7. Future Directions and Open Questions

Key future research areas and challenges outlined in XAIxArts literature include:

  • Formalization of Sense-Making Networks: Developing technical and methodological toolkits for networked, co-creative explanation as an alternative to post-hoc justification (Arora et al., 2023).
  • Scalable Multisensory Evaluation: Deployment and assessment of olfactory, haptic, and multimodal explanation schemes at scale (Clemens, 2023).
  • Sustained Artistic Practice Toolkits: Embedding explainability throughout data curation, training, inference, and performance, not just as a post-hoc add-on (Tecks et al., 21 Jul 2024).
  • Community Standards and Living Manifestos: Ongoing articulation of living, revisable manifestos and ethical guidelines that can adapt to evolving artistic–technological ecologies (Bryan-Kinns et al., 28 Feb 2025).
  • Interactive Feedback Loops: Integration of participatory feedback mechanisms, context modeling, and adaptive explanation tuning in real-world systems (Privato et al., 2023).
  • Embodied and Situated Explanation: Expanding evaluation protocols and conceptual models for embodied, performative, and situated forms of explanation (beyond textual/visual narratives) (Wilson et al., 18 Oct 2024, Hemment et al., 2023).

The field continues to advocate for cross-disciplinary exchange, artist residencies, educator toolkits, and the principled study of long-term sociocultural impacts.

