Standing on FURM ground -- A framework for evaluating Fair, Useful, and Reliable AI Models in healthcare systems (2403.07911v2)

Published 27 Feb 2024 in cs.CY and cs.AI

Abstract: The impact of using AI to guide patient care or operational processes is an interplay of the AI model's output, the decision-making protocol based on that output, and the capacity of the stakeholders involved to take the necessary subsequent action. Estimating the effects of this interplay before deployment, and studying it in real time afterwards, are essential to bridge the chasm between AI model development and achievable benefit. To accomplish this, the Data Science team at Stanford Health Care has developed a Testing and Evaluation (T&E) mechanism to identify fair, useful, and reliable AI models (FURM) by conducting an ethical review to identify potential value mismatches, simulations to estimate usefulness, and financial projections to assess sustainability, as well as analyses to determine IT feasibility, design a deployment strategy, and recommend a prospective monitoring and evaluation plan. We report on FURM assessments done to evaluate six AI-guided solutions for potential adoption, spanning clinical and operational settings, each with the potential to impact from several dozen to tens of thousands of patients each year. We describe the assessment process, summarize the six assessments, and share our framework to enable others to conduct similar assessments. Of the six solutions we assessed, two have moved into a planning and implementation phase. Our novel contributions -- usefulness estimates by simulation, financial projections to quantify sustainability, and a process to do ethical assessments -- as well as their underlying methods and open source tools, are available for other healthcare systems to conduct actionable evaluations of candidate AI solutions.

Authors (24)
  1. Alison Callahan (7 papers)
  2. Duncan McElfresh (5 papers)
  3. Juan M. Banda (17 papers)
  4. Gabrielle Bunney (1 paper)
  5. Danton Char (2 papers)
  6. Jonathan Chen (11 papers)
  7. Conor K. Corbin (4 papers)
  8. Debadutta Dash (3 papers)
  9. Norman L. Downing (1 paper)
  10. Srikar Nallan (1 paper)
  11. Sneha S. Jain (3 papers)
  12. Nikesh Kotecha (4 papers)
  13. Jonathan Masterson (1 paper)
  14. Michelle M. Mello (1 paper)
  15. Keith Morse (3 papers)
  16. Abby Pandya (1 paper)
  17. Anurang Revri (2 papers)
  18. Aditya Sharma (32 papers)
  19. Christopher Sharp (3 papers)
  20. Rahul Thapa (16 papers)
Citations (2)