
Users are the North Star for AI Transparency (2303.05500v1)

Published 9 Mar 2023 in cs.CY, cs.AI, and cs.HC

Abstract: Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research. Consequently, stakeholders often talk past each other, with policymakers expressing vague demands and practitioners devising solutions that may not address the underlying concerns. Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work. We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest. We conduct a broad literature survey, identifying many clusters of similar conceptions of transparency, tying each back to our north star with analysis of how it furthers or hinders our ideal AI transparency goals. We conclude with a discussion on common threads across all the clusters, to provide clearer common language whereby policymakers, stakeholders, and practitioners can communicate concrete demands and deliver appropriate solutions. We hope for future work on AI transparency that further advances confident, user-beneficial goals and provides clarity to regulators and developers alike.

Authors (5)
  1. Alex Mei (6 papers)
  2. Michael Saxon (27 papers)
  3. Shiyu Chang (120 papers)
  4. William Yang Wang (254 papers)
  5. Zachary C. Lipton (137 papers)
Citations (8)

Summary

A Comprehensive Survey of AI Transparency through a User-Centered Framework

This paper dissects the multifaceted notion of AI transparency, addressing its commonly misunderstood and vaguely defined conceptualization. Acknowledging the broader social demands and regulatory frameworks calling for transparency in AI systems, the paper identifies a critical need for clarity in how transparency is defined and applied across different stakeholder groups. As a remedy, the authors propose a user-centered framework intended to align diverse stakeholder interests with specific conceptions of transparency.

Core Argument and Methodology

The authors argue that the current discourse on AI transparency is hampered by a lack of precise terminology and clear objectives, leading to fragmented interpretations. To address this, they introduce the notion of a “North Star” for AI transparency—transparency that is user-centered, user-appropriate, and honest. This ideal aims to unify disparate research efforts and align them with practical and policy-oriented goals.

To substantiate their claims, the paper conducts an extensive literature survey, analyzing how different facets of transparency have been conceptualized. Key areas of focus include:

  • Data-Related Transparency: The paper examines transparency from the perspectives of data collection, data processing, and training-data documentation. The authors stress the importance of both record transparency and data-provisioning transparency for communicating data-related constraints and potential biases to users.
  • System-Centered Transparency: Focusing on the internal mechanics of AI systems, the survey highlights the importance of explainability and system disclosure. While discussing methods like attention mechanisms and influence functions, the paper critiques non-causal explanations for potentially misleading users regarding system functionality.
  • Output-Oriented Transparency: This section connects transparency with performance evaluation, reproducibility, and fairness in AI systems. The authors pinpoint the need for standards and norms that encourage accurate reproducibility and hold systems accountable for fairness across diverse demographic groups.
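The critique of non-causal explanations can be made concrete with a small sketch. The code below (an illustrative assumption, not code from the paper) computes scaled dot-product attention weights over a handful of inputs: the weights form a tidy distribution that is tempting to read as "importance", yet they only describe where the model looked, not the causal pathway from input to output — which is precisely the misleading reading the authors warn against.

```python
import numpy as np

def attention_weights(query, keys):
    """Scaled dot-product attention weights for one query over a set of keys."""
    d = keys.shape[-1]
    scores = keys @ query / np.sqrt(d)   # similarity of the query to each key
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    return exp / exp.sum()

# Toy "explanation": a weight per input token for a single query vector.
rng = np.random.default_rng(0)
q = rng.standard_normal(4)        # hypothetical query
K = rng.standard_normal((3, 4))   # hypothetical keys for three input tokens
w = attention_weights(q, K)

# The weights are a valid probability distribution, so they *look* like
# per-token importance scores -- but nothing here establishes that the
# highest-weighted token actually caused the model's output.
assert np.isclose(w.sum(), 1.0)
assert np.all(w > 0)
```

A causal account would instead require intervening on the inputs (or internal states) and measuring the effect on the output, which is why the paper points to causal explanation as an open technical challenge rather than something attention weights provide for free.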

Implications and Future Directions

The implications of this research are manifold. For practitioners, aligning transparency initiatives with a user-centered framework offers a pathway to develop AI systems that better meet societal demands for accountability and fairness. For policymakers, the insights provided by this survey offer a foundational guide to crafting clearer regulatory guidelines that account for the nuanced definitions of transparency.

The paper advocates for further research geared toward refining and operationalizing the proposed user-centered transparency framework. Future developments should address the technical challenges of achieving causal explanations and work on regulatory standards that mitigate risks associated with deceptive transparency practices.

Conclusion

In conclusion, this paper contributes to the field by elucidating the multifarious uses and interpretations of transparency in AI systems through the lens of a coherent user-centered framework. By connecting disparate threads of research under this framework, the paper lays a solid foundation for future work aimed at achieving transparency that is truly beneficial to users, regulators, and developers alike. The survey encapsulates a call to action for more precise language and alignment in transparency research, with the ultimate goal of empowering stakeholders to trust AI systems based on clear, truthful, and meaningful information.
