Zeno: An Interactive Framework for Behavioral Evaluation of Machine Learning (2302.04732v1)

Published 9 Feb 2023 in cs.HC

Abstract: Machine learning models with high accuracy on test data can still produce systematic failures, such as harmful biases and safety issues, when deployed in the real world. To detect and mitigate such failures, practitioners run behavioral evaluation of their models, checking model outputs for specific types of inputs. Behavioral evaluation is important but challenging, requiring that practitioners discover real-world patterns and validate systematic failures. We conducted 18 semi-structured interviews with ML practitioners to better understand the challenges of behavioral evaluation and found that it is a collaborative, use-case-first process that is not adequately supported by existing task- and domain-specific tools. Using these findings, we designed Zeno, a general-purpose framework for visualizing and testing AI systems across diverse use cases. In four case studies with participants using Zeno on real-world models, we found that practitioners were able to reproduce previous manual analyses and discover new systematic failures.
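To make the idea of behavioral evaluation concrete, the sketch below groups test inputs into named slices and checks a metric on each, flagging slices where performance drops. This is a minimal, hypothetical illustration of the process the abstract describes, not Zeno's actual API; the dataset fields, slice definitions, and threshold are assumptions for the example.

```python
# Minimal sketch of behavioral evaluation: check model outputs on
# specific slices of inputs and flag slices with degraded accuracy.
# Field names, slices, and the threshold are hypothetical; this is
# not Zeno's API.
from typing import Callable, Dict, List


def accuracy(preds: List[str], labels: List[str]) -> float:
    """Fraction of predictions matching the reference labels."""
    return sum(p == l for p, l in zip(preds, labels)) / max(len(labels), 1)


def evaluate_slices(
    data: List[Dict],
    predict: Callable[[Dict], str],
    slices: Dict[str, Callable[[Dict], bool]],
    threshold: float = 0.9,
) -> Dict[str, float]:
    """Compute accuracy on each named slice and report potential failures."""
    results: Dict[str, float] = {}
    for name, belongs in slices.items():
        subset = [ex for ex in data if belongs(ex)]
        if not subset:
            continue
        preds = [predict(ex) for ex in subset]
        labels = [ex["label"] for ex in subset]
        score = accuracy(preds, labels)
        results[name] = score
        if score < threshold:
            print(f"Potential systematic failure on slice '{name}': {score:.2f}")
    return results


# Example usage with hypothetical slices for a sentiment model:
# evaluate_slices(
#     data=test_set,
#     predict=model.predict,
#     slices={
#         "short inputs": lambda ex: len(ex["text"].split()) < 5,
#         "contains negation": lambda ex: "not" in ex["text"].lower(),
#     },
# )
```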

Authors (7)
  1. Ángel Alexander Cabrera (11 papers)
  2. Erica Fu (2 papers)
  3. Donald Bertucci (4 papers)
  4. Kenneth Holstein (37 papers)
  5. Ameet Talwalkar (89 papers)
  6. Jason I. Hong (17 papers)
  7. Adam Perer (29 papers)
Citations (36)
