
Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience (2302.10395v1)

Published 21 Feb 2023 in cs.HC and cs.AI

Abstract: Despite the widespread use of AI, designing user experiences (UX) for AI-powered systems remains challenging. UX designers face hurdles understanding AI technologies, such as pre-trained LLMs, as design materials. This limits their ability to ideate and make decisions about whether, where, and how to use AI. To address this problem, we bridge the literature on AI design and AI transparency to explore whether and how frameworks for transparent model reporting can support design ideation with pre-trained models. By interviewing 23 UX practitioners, we find that practitioners frequently work with pre-trained models, but lack support for UX-led ideation. Through a scenario-based design task, we identify common goals that designers seek model understanding for and pinpoint their model transparency information needs. Our study highlights the pivotal role that UX designers can play in Responsible AI and calls for supporting their understanding of AI limitations through model transparency and interrogation.

Authors (4)
  1. Q. Vera Liao (49 papers)
  2. Hariharan Subramonyam (13 papers)
  3. Jennifer Wang (14 papers)
  4. Jennifer Wortman Vaughan (52 papers)
Citations (48)
