
Supporting Sensemaking of Large Language Model Outputs at Scale (2401.13726v1)

Published 24 Jan 2024 in cs.HC and cs.LG

Abstract: LLMs are capable of generating multiple responses to a single prompt, yet little effort has been expended to help end-users or system designers make use of this capability. In this paper, we explore how to present many LLM responses at once. We design five features, which include both pre-existing and novel methods for computing similarities and differences across textual documents, as well as how to render their outputs. We report on a controlled user study (n=24) and eight case studies evaluating these features and how they support users in different tasks. We find that the features support a wide variety of sensemaking tasks and even make tasks previously considered to be too difficult by our participants now tractable. Finally, we present design guidelines to inform future explorations of new LLM interfaces.
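The abstract mentions computing similarities and differences across many LLM responses to a single prompt. The paper does not specify its algorithms here, but as a minimal illustrative sketch (not the authors' method), pairwise similarity over a set of candidate responses can be computed with the Python standard library's `difflib.SequenceMatcher`; the `responses` list and `pairwise_similarity` helper below are hypothetical:

```python
from difflib import SequenceMatcher
from itertools import combinations

def pairwise_similarity(responses):
    """Return a dict mapping each pair of response indices (i, j)
    to a similarity ratio in [0, 1]."""
    scores = {}
    for (i, a), (j, b) in combinations(enumerate(responses), 2):
        # ratio() measures character-level overlap between the two texts
        scores[(i, j)] = SequenceMatcher(None, a, b).ratio()
    return scores

# Hypothetical example: three responses to the same prompt
responses = [
    "The capital of France is Paris.",
    "Paris is the capital of France.",
    "France's capital city is Paris.",
]
scores = pairwise_similarity(responses)
```

A real interface at the scale the paper targets would likely use semantic embeddings rather than character overlap, but the same pairwise structure applies.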

Authors (5)
  1. Katy Ilonka Gero (9 papers)
  2. Chelse Swoopes (5 papers)
  3. Ziwei Gu (3 papers)
  4. Jonathan K. Kummerfeld (38 papers)
  5. Elena L. Glassman (19 papers)
Citations (11)