Expanding Explainability: Towards Social Transparency in AI systems (2101.04719v1)

Published 12 Jan 2021 in cs.HC and cs.AI

Abstract: As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially-situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algorithm-centered. We take a developmental step towards socially-situated XAI by introducing and exploring Social Transparency (ST), a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making. To explore ST conceptually, we conducted interviews with 29 AI users and practitioners grounded in a speculative design scenario. We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST's effect and implications at the technical, decision-making, and organizational level. The framework showcases how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective actions, and cultivate holistic explainability. Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.

Authors (5)
  1. Upol Ehsan (16 papers)
  2. Q. Vera Liao (49 papers)
  3. Michael Muller (70 papers)
  4. Mark O. Riedl (57 papers)
  5. Justin D. Weisz (26 papers)
Citations (310)

Summary

  • The paper introduces Social Transparency (ST) to extend traditional XAI by incorporating socio-organizational context into AI explanations.
  • The paper employs a scenario-based design using ‘4W’ information in a sales context to provide contextual insights for improved trust.
  • The paper demonstrates that integrating technological, decision-making, and organizational contexts enhances trust calibration and decision resilience.

Expanding Explainability: Towards Social Transparency in AI Systems

The paper "Expanding Explainability: Towards Social Transparency in AI Systems" provides a critical examination of the current landscape of Explainable AI (XAI) and introduces the novel concept of Social Transparency (ST) as a means of embedding socio-organizational context into AI explanations. This work is anchored in the assertion that traditional XAI approaches have predominantly been algorithm-centric, and thus overlook the socially-situated nature of AI systems that are deeply embedded in socio-organizational fabrics. The proposed ST approach aims to bridge these gaps by incorporating the broader social, decision-making, and organizational contexts, which are critical for holistic understanding and trust in AI systems.

The research uses scenario-based design (SBD) to explore ST in AI-mediated decision-making systems, with a specific application to a sales context. The central idea is to present "4W" information alongside each AI recommendation: who interacted with the AI system, what decisions they made, when these interactions occurred, and why they decided as they did. The authors argue that this approach gives users contextual knowledge, including the tacit, context-specific insights that are critical for making informed decisions.
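
To make the 4W idea concrete, here is a minimal sketch of how such records might be represented. The paper prescribes no implementation; the `FourWRecord` dataclass, its field names, and the example values below are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FourWRecord:
    """One Social Transparency entry attached to an AI recommendation.

    Captures the paper's '4W': who interacted with the AI, what they
    decided, when the interaction happened, and why they decided so.
    (Field names are illustrative; the paper defines no schema.)
    """
    who: str        # person or role that acted on the AI's output
    what: str       # the decision they made (e.g., accept or override)
    when: datetime  # timestamp of the interaction
    why: str        # free-text rationale for the decision

# Example: past interactions shown alongside a pricing recommendation.
history = [
    FourWRecord("J. Rivera (Sales)", "overrode suggested price",
                datetime(2020, 4, 2), "client budget cut during pandemic"),
    FourWRecord("A. Chen (Sales)", "accepted suggested price",
                datetime(2020, 5, 10), "matched comparable recent deals"),
]

for record in history:
    print(f"{record.when:%Y-%m-%d} | {record.who}: {record.what} ({record.why})")
```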

A critical takeaway from the research is that technical transparency alone is insufficient for effective decision-making. AI systems often lack the capacity to capture contextual nuances, such as social dynamics, client-specific requirements, or emerging environmental factors like global pandemics. The inclusion of ST encourages users to understand AI recommendations within the broader tapestry of human interactions and decisions, thus enhancing decision confidence and fostering more nuanced AI trust calibration.

The paper identifies three levels of context made visible by ST:

  • Technological Context: By tracing the trajectory of the AI's past decision outputs alongside human interactions, users gain insight into the AI's performance, allowing them to better calibrate trust and integrate human judgment into algorithmic processes (see the sketch after this list).
  • Decision-making Context: ST supports users in accessing local decision-related contexts, thus facilitating social validation and enabling decision resilience against over-reliance on AI outputs. It emphasizes the actionable insights that can emerge from analogical reasoning with similar past decisions.
  • Organizational Context: ST enhances understanding of organizational norms, aiding in setting job expectations, supporting accountability, and fostering meta-knowledge necessary for effective expert location and the development of Transactive Memory Systems (TMS).
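
As one illustration of the technological context, past 4W records could be aggregated into a simple signal of how often colleagues accepted the AI's output. This is a speculative sketch, not the paper's method; the acceptance-rate heuristic and the sample data are assumptions.

```python
def acceptance_rate(decisions: list[str]) -> float:
    """Fraction of logged 'what' entries where the AI output was accepted.

    A crude proxy for the technological context: how the AI has fared
    against human judgment so far. (This heuristic is an assumption,
    not something the paper implements.)
    """
    if not decisions:
        return 0.0
    accepted = sum("accepted" in d for d in decisions)
    return accepted / len(decisions)

# 'what' fields drawn from past 4W records (illustrative data):
past_decisions = [
    "accepted suggested price",
    "overrode suggested price",
    "accepted suggested price",
]
print(f"AI suggestion accepted in {acceptance_rate(past_decisions):.0%} of cases.")
```

A UI might surface such a summary next to the who/what/when/why details, leaving the trust judgment to the user rather than automating it.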

Empirically, the interview study with 29 AI users and practitioners suggested the practicality of ST in the sales scenario: after engaging with the ST elements, participants adjusted their pricing strategies and reported heightened confidence in their decisions. The qualitative findings pointed to improved trust calibration and greater perceived decision-support quality.

The implications of this research are multifaceted. Theoretically, it broadens the scope of XAI by moving beyond algorithmic transparency to a more holistic, socially-situated model of AI transparency. Practically, it proposes design considerations for AI systems that aim to enhance human-AI collaboration by enriching explanations with socio-organizational context. The authors also recognize risks such as privacy concerns, biases, and cognitive overload, marking areas for future exploration and refinement in socially-situated XAI systems.

In conclusion, the proposed ST framework represents a significant step towards integrating socio-organizational context within AI explanations. By demonstrating how social transparency can impact trust and decision-making, the paper sets the stage for future advancements in the field of XAI, advocating for a nuanced understanding that leverages both technical and social dimensions to facilitate effective AI system interaction and adoption.
