A Comprehensive Survey of AI Transparency through a User-Centered Framework
This paper dissects the multifaceted notion of AI transparency, a concept that is often vaguely defined and widely misunderstood. Acknowledging the broader social demands and regulatory frameworks calling for transparency in AI systems, the paper identifies a critical need for clarity in how transparency is defined and applied across different stakeholder groups. As a remedy, the authors propose a user-centered framework intended to align diverse stakeholder interests with specific conceptualizations of transparency.
Core Argument and Methodology
The authors argue that the current discourse on AI transparency is hampered by a lack of precise terminology and clear objectives, leading to fragmented interpretations. To address this, they introduce the notion of a “North Star” for AI transparency—transparency that is user-centered, user-appropriate, and honest. This ideal aims to unify disparate research efforts and align them with practical and policy-oriented goals.
To substantiate their claims, the paper conducts an extensive literature survey, analyzing how different facets of transparency have been conceptualized. Key areas of focus include:
- Data-Related Transparency: The paper examines transparency from the perspectives of data collection, data processing, and training-data documentation. The authors stress the importance of both record transparency and data-provisioning transparency as means of communicating data-related constraints and potential biases to downstream users (a minimal documentation sketch follows this list).
- System-Centered Transparency: Focusing on the internal mechanics of AI systems, the survey highlights the importance of explainability and system disclosure. In discussing methods such as attention mechanisms and influence functions, the paper cautions that non-causal explanations can mislead users about how a system actually functions (see the attention-weight sketch after this list).
- Output-Oriented Transparency: This section connects transparency with performance evaluation, reproducibility, and fairness in AI systems. The authors highlight the need for standards and norms that support reproducible evaluation and hold systems accountable for fairness across diverse demographic groups (see the disaggregated-evaluation sketch after this list).
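To make the data-documentation idea concrete, here is a minimal sketch of what a machine-readable dataset record might look like. The class and field names (`DatasetRecord`, `known_gaps`, and so on) are hypothetical illustrations, not an API from the paper or from any particular documentation standard such as datasheets or data cards.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Illustrative datasheet-style record; all field names are hypothetical."""
    name: str
    collection_method: str                                # how the data was gathered
    preprocessing_steps: list[str] = field(default_factory=list)
    known_gaps: list[str] = field(default_factory=list)   # constraints and potential biases

record = DatasetRecord(
    name="toy-sentiment-v1",
    collection_method="crowdsourced labels",
    preprocessing_steps=["lowercased text", "removed non-English reviews"],
    known_gaps=["underrepresents speakers under 18", "English only"],
)
print(record)  # a record like this travels with the data it documents
```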
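The critique of non-causal explanations can be illustrated with a toy attention computation. The sketch below assumes nothing beyond NumPy, and all numbers are made up: it shows that the token receiving the most attention can still contribute little to the output when its value vector is small, which is one reason attention weights alone can be a misleading explanation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy single-query attention over four input tokens (made-up numbers).
query  = np.array([1.0, 0.0])
keys   = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5], [0.8, 0.2]])
values = np.array([[0.05], [0.7], [0.1], [0.4]])

weights = softmax(keys @ query)  # the "explanation" many tools display
output  = weights @ values       # what the model actually produces

print("attention weights:", np.round(weights, 3))
print("output:", np.round(output, 3))
# Token 0 receives the highest attention weight, yet its tiny value vector
# means it barely moves the output: high attention does not imply causal
# influence, so attention weights are a non-causal explanation.
```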
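Finally, holding systems accountable for fairness across demographic groups typically starts with disaggregated evaluation. The following sketch computes per-group accuracy and the gap between the best- and worst-served groups; the function name `accuracy_by_group` and the toy labels are hypothetical, and a real audit would use richer metrics than accuracy alone.

```python
from collections import defaultdict

def accuracy_by_group(labels, preds, groups):
    """Per-group accuracy: a basic disaggregated evaluation."""
    hits, totals = defaultdict(int), defaultdict(int)
    for y, yhat, g in zip(labels, preds, groups):
        totals[g] += 1
        hits[g] += int(y == yhat)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical labels, predictions, and group memberships.
labels = [1, 0, 1, 1, 0, 1, 0, 0]
preds  = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

scores = accuracy_by_group(labels, preds, groups)
print(scores)                                        # {'a': 0.75, 'b': 0.5}
print("gap:", max(scores.values()) - min(scores.values()))  # 0.25
```

Reporting the gap alongside aggregate accuracy is what makes an evaluation transparent about which groups a system serves less well.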
Implications and Future Directions
The implications of this research are manifold. For practitioners, aligning transparency initiatives with a user-centered framework offers a pathway to develop AI systems that better meet societal demands for accountability and fairness. For policymakers, the insights provided by this survey offer a foundational guide to crafting clearer regulatory guidelines that account for the nuanced definitions of transparency.
The paper advocates for further research geared toward refining and operationalizing the proposed user-centered transparency framework. Future developments should address the technical challenges of achieving causal explanations and work on regulatory standards that mitigate risks associated with deceptive transparency practices.
Conclusion
In conclusion, this paper contributes to the field by elucidating the many uses and interpretations of transparency in AI systems through the lens of a coherent user-centered framework. By connecting disparate threads of research under this framework, it lays a solid foundation for future work aimed at transparency that genuinely benefits users, regulators, and developers alike. The survey closes with a call for more precise language and better alignment in transparency research, with the ultimate goal of enabling stakeholders to trust AI systems on the basis of clear, truthful, and meaningful information.