- The paper introduces Social Transparency (ST) to extend traditional XAI by incorporating socio-organizational context into AI explanations.
- The paper employs a scenario-based design using ‘4W’ information in a sales context to provide contextual insights for improved trust.
- The paper suggests that making technological, decision-making, and organizational contexts visible supports trust calibration and decision resilience.
Expanding Explainability: Towards Social Transparency in AI Systems
The paper "Expanding Explainability: Towards Social Transparency in AI Systems" provides a critical examination of the current landscape of Explainable AI (XAI) and introduces the novel concept of Social Transparency (ST) as a means of embedding socio-organizational context into AI explanations. This work is anchored in the assertion that traditional XAI approaches have predominantly been algorithm-centric, and thus overlook the socially-situated nature of AI systems that are deeply embedded in socio-organizational fabrics. The proposed ST approach aims to bridge these gaps by incorporating the broader social, decision-making, and organizational contexts, which are critical for holistic understanding and trust in AI systems.
The research uses a scenario-based design (SBD) to explore ST in AI-mediated decision-making, applied to a sales-pricing scenario. The central idea is to surface "4W" information alongside the AI's recommendation: who interacted with the AI system, what they decided, when the interaction occurred, and why they decided as they did. The authors argue that this gives users access to tacit, context-specific knowledge that is critical for making informed decisions.
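To make the "4W" idea concrete, the minimal sketch below shows one way such records could be attached to an AI recommendation. The paper presents ST as a design concept rather than an implementation, so the class and field names here (`FourWRecord`, `AIRecommendation`, `explanation_feed`) are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class FourWRecord:
    who: str        # which colleague acted on the AI's output
    what: str       # what they did (e.g., accepted, overrode, adjusted the price)
    when: datetime  # when the interaction happened
    why: str        # free-text rationale behind the decision

@dataclass
class AIRecommendation:
    item: str                   # e.g., the product being priced
    suggested_price: float      # the AI's recommendation
    history: List[FourWRecord]  # social context shown alongside the output

    def explanation_feed(self) -> List[str]:
        """Render the 4W history as short, human-readable entries."""
        return [
            f'{r.when:%Y-%m-%d}: {r.who} {r.what} ("{r.why}")'
            for r in sorted(self.history, key=lambda r: r.when)
        ]
```

In a UI following this sketch, the feed produced by `explanation_feed()` would sit next to the model's suggested price, so the technical explanation and the social context are read together.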
A critical takeaway from the research is that technical transparency alone is insufficient for effective decision-making. AI systems often cannot capture contextual nuances such as social dynamics, client-specific requirements, or emerging external factors like a global pandemic. ST encourages users to interpret AI recommendations within the broader context of human interactions and decisions, enhancing decision confidence and supporting more nuanced trust calibration.
The paper identifies three levels of context made visible by ST (a sketch of how each might be surfaced from 4W records follows this list):
- Technological Context: By tracing the trajectory of the AI's past decision outputs alongside human interactions, users gain insights into the AI's performance, allowing them to better calibrate trust and integrate human judgment into algorithmic processes.
- Decision-making Context: ST supports users in accessing local decision-related contexts, thus facilitating social validation and enabling decision resilience against over-reliance on AI outputs. It emphasizes the actionable insights that can emerge from analogical reasoning with similar past decisions.
- Organizational Context: ST enhances understanding of organizational norms, aiding in setting job expectations, supporting accountability, and fostering meta-knowledge necessary for effective expert location and the development of Transactive Memory Systems (TMS).
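As a rough illustration only, and continuing the hypothetical `FourWRecord` sketch above rather than anything the paper specifies, each of these context levels could be derived from the same 4W history:

```python
from collections import Counter
from typing import List

# Assumes the hypothetical FourWRecord dataclass from the earlier sketch is in scope.

def technological_context(history: List["FourWRecord"]) -> float:
    """Share of past AI suggestions that colleagues accepted (a crude performance/trust signal)."""
    if not history:
        return 0.0
    accepted = sum(1 for r in history if "accepted" in r.what.lower())
    return accepted / len(history)

def decision_context(history: List["FourWRecord"], keyword: str) -> List["FourWRecord"]:
    """Retrieve past decisions whose rationale mentions a similar situation, for analogical reasoning."""
    return [r for r in history if keyword.lower() in r.why.lower()]

def organizational_context(history: List["FourWRecord"]) -> List[str]:
    """Rank colleagues by how often they have handled such decisions (simple expert location)."""
    return [name for name, _ in Counter(r.who for r in history).most_common()]
```

These helpers are deliberately simplistic; the point is only that the same 4W records can be aggregated in different ways to expose technological, decision-making, and organizational context.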
Empirically, the scenario-based study illustrated the practicality of ST: after engaging with the ST elements, participants adjusted their pricing strategies and reported heightened confidence in their decisions. The findings indicated improved trust calibration and greater perceived decision-support quality.
The implications of this research are multifaceted. Theoretically, it broadens the scope of XAI by moving beyond algorithmic transparency toward a more holistic, socially situated model of AI transparency. Practically, it proposes a set of design considerations for AI systems that aim to enhance human-AI collaboration by enriching explanations with socio-organizational context. However, the research also recognizes risks such as privacy concerns, bias, and cognitive overload, indicating areas for future exploration and refinement in socially situated XAI systems.
In conclusion, the proposed ST framework represents a significant step towards integrating socio-organizational context within AI explanations. By demonstrating how social transparency can impact trust and decision-making, the paper sets the stage for future advancements in the field of XAI, advocating for a nuanced understanding that leverages both technical and social dimensions to facilitate effective AI system interaction and adoption.