AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap (2306.01941v2)

Published 2 Jun 2023 in cs.HC, cs.AI, and cs.CY

Abstract: The rise of powerful LLMs brings about tremendous opportunities for innovation but also looming risks for individuals and society at large. We have reached a pivotal moment for ensuring that LLMs and LLM-infused applications are developed and deployed responsibly. However, a central pillar of responsible AI -- transparency -- is largely missing from the current discourse around LLMs. It is paramount to pursue new approaches to provide transparency for LLMs, and years of research at the intersection of AI and human-computer interaction (HCI) highlight that we must do so with a human-centered perspective: Transparency is fundamentally about supporting appropriate human understanding, and this understanding is sought by different stakeholders with different goals in different contexts. In this new era of LLMs, we must develop and design approaches to transparency by considering the needs of stakeholders in the emerging LLM ecosystem, the novel types of LLM-infused applications being built, and the new usage patterns and challenges around LLMs, all while building on lessons learned about how people process, interact with, and make use of information. We reflect on the unique challenges that arise in providing transparency for LLMs, along with lessons learned from HCI and responsible AI research that has taken a human-centered perspective on AI transparency. We then lay out four common approaches that the community has taken to achieve transparency -- model reporting, publishing evaluation results, providing explanations, and communicating uncertainty -- and call out open questions around how these approaches may or may not be applied to LLMs. We hope this provides a starting point for discussion and a useful roadmap for future research.

Human-Centered Approaches to AI Transparency in the Age of LLMs

The research paper "AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap" by Q. Vera Liao and Jennifer Wortman Vaughan explores the complexities surrounding the transparency of LLMs. The work not only offers a reflective overview of current challenges and lessons from adjacent fields but also lays out a structured research roadmap for improving AI transparency practices, particularly with respect to the needs of human stakeholders.

Key Challenges in LLM Transparency

The paper highlights several challenges intrinsic to LLMs. These arise primarily from their sophisticated architectures and vast, opaque training datasets, which lead to unpredictable capabilities and behaviors that complicate transparency efforts. Many LLMs are also proprietary, hindering openness about how they work. Furthermore, deploying LLMs in multifaceted applications demands transparency approaches tailored to different stakeholder needs, complicating the issue further.

Insights from HCI and Responsible AI Research

Drawing on years of research in Human-Computer Interaction (HCI), the authors emphasize a shift toward a human-centered approach to AI transparency. They argue that transparency is not an end in itself but serves varied goals, such as supporting decision-making, calibrating trust, and improving systems. Achieving these goals requires not only providing suitable information but also tailoring it to the cognitive and contextual processes of diverse stakeholders.

Notably, the paper points out the intertwined nature of transparency and control, advocating for systems that let stakeholders engage with AI models meaningfully. The authors also underscore the pitfalls of incomplete or obscured transparency and call for systems that not only provide the necessary information but also support accountability and contestability.

Reviewing Existing Transparency Approaches

  1. Model Reporting: The paper notes the difficulty of extending existing model documentation practices to LLMs, given the models' general-purpose ambitions and evolving behaviors. The authors call for interactive, dynamic reporting methods that allow stakeholders to engage with documentation in a tailored manner (a model-card sketch follows this list).
  2. Evaluations: The complexity and adaptability of LLMs make evaluation a formidable task. The authors propose holistic approaches that consider both technical performance and socio-ethical impacts, tailored to the expectations and contexts of different stakeholders.
  3. Explanations: The paper critically examines the challenge of faithfully explaining LLM outputs, given the models' scale and effectively black-box nature. It emphasizes the need for new methodologies that go beyond feature attribution toward more comprehensible explanations.
  4. Communication of Uncertainty: Conveying the uncertainty in LLM outputs is vital. The paper discusses communication strategies that balance precision with comprehensibility, so that users receive actionable insight rather than being overwhelmed (see the second sketch below).
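
To make the model-reporting discussion concrete, below is a minimal sketch of what a machine-readable model card for an LLM might contain. The schema, field names, and values are illustrative assumptions rather than anything the paper prescribes; the authors in fact argue that static documents of this kind may need to become interactive and stakeholder-specific for LLMs.

```python
# Illustrative, machine-readable model card for a hypothetical LLM.
# All field names and values are assumptions for the sake of example;
# the paper does not prescribe a schema, and real documentation
# practices (model cards, datasheets, system cards) vary widely.

model_card = {
    "model_details": {
        "name": "example-llm",  # hypothetical model
        "version": "1.0",
        "type": "autoregressive transformer language model",
    },
    "intended_use": {
        "primary_uses": ["drafting assistance", "summarization"],
        "out_of_scope": ["medical or legal advice without human review"],
    },
    "training_data": {
        "description": "web text and licensed corpora (high-level only)",
        "known_gaps": ["low-resource languages underrepresented"],
    },
    "evaluation": {
        "benchmarks": {"helpfulness": 0.81, "toxicity_rate": 0.02},  # illustrative numbers
        "caveats": "benchmark scores may not transfer to downstream uses",
    },
    "limitations": ["may produce plausible but incorrect statements"],
}


def render_card(card: dict, indent: int = 0) -> str:
    """Flatten the nested card into an indented, human-readable report."""
    lines = []
    for key, value in card.items():
        pad = "  " * indent
        if isinstance(value, dict):
            lines.append(f"{pad}{key}:")
            lines.append(render_card(value, indent + 1))
        else:
            lines.append(f"{pad}{key}: {value}")
    return "\n".join(lines)


print(render_card(model_card))
```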

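On the uncertainty side, here is a minimal sketch of one common strategy: deriving a confidence score from the model's next-token probability distribution and translating it into a hedged verbal label. This assumes access to per-token probabilities, which not all LLM providers expose, and the thresholds are arbitrary illustrations.

```python
import math


def token_confidence(probs: list[float]) -> float:
    """Map a next-token probability distribution to a 0..1 score via
    normalized entropy: 1.0 means fully certain, 0.0 means uniform."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    max_entropy = math.log(len(probs))
    return 1.0 - entropy / max_entropy if max_entropy > 0 else 1.0


def verbal_label(score: float) -> str:
    """Translate a numeric score into a hedged verbal phrase; the
    thresholds are arbitrary and would need user testing in practice."""
    if score >= 0.9:
        return "high confidence"
    if score >= 0.6:
        return "moderate confidence"
    return "low confidence, please verify independently"


# A sharply peaked distribution vs. a flatter, more uncertain one.
peaked = [0.97, 0.01, 0.01, 0.01]
flat = [0.4, 0.3, 0.2, 0.1]
for dist in (peaked, flat):
    score = token_confidence(dist)
    print(f"{score:.2f} -> {verbal_label(score)}")
```

Token-level probabilities are at best a rough proxy for the correctness of a full response, and the paper notes that calibrating and communicating LLM uncertainty remains an open problem; the mapping to verbal labels above is one simple strategy among many.
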
Implications for Future AI Development

The paper looks ahead to a future in which transparency research evolves alongside LLM advancements. For practical applications, the authors suggest involving stakeholders from varied backgrounds to ensure inclusivity, accountability, and fair representation of model capabilities. Moreover, regulatory interventions may be necessary to enforce more rigorous transparency standards across LLM creators.

In conclusion, the authors advocate for deeper integration of transparency practices into LLM development and deployment, given the challenges and opportunities of a fast-evolving AI landscape. Their work lays the groundwork for further exploration of not only technical methodologies but also the ethical and practical implications of deploying increasingly complex LLMs at scale.
