Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle (2406.09029v1)

Published 13 Jun 2024 in cs.CY

Abstract: Fairness is one of the most commonly identified ethical principles in existing AI guidelines, and the development of fair AI-enabled systems is required by new and emerging AI regulation. But most approaches to addressing the fairness of AI-enabled systems are limited in scope in two significant ways: their substantive content focuses on statistical measures of fairness, and they do not emphasize the need to identify and address fairness considerations across the whole AI lifecycle. Our contribution is to present an assurance framework and tool that can enable a practical and transparent method for widening the scope of fairness considerations across the AI lifecycle and move the discussion beyond mere statistical notions of fairness to consider a richer analysis in a practical and context-dependent manner. To illustrate this approach, we first describe and then apply the framework of Trustworthy and Ethical Assurance (TEA) to an AI-enabled clinical diagnostic support system (CDSS) whose purpose is to help clinicians predict the risk of developing hypertension in patients with Type 2 diabetes, a context in which several fairness considerations arise (e.g., discrimination against patient subgroups). This is supplemented by an open-source tool and a fairness considerations map to help facilitate reasoning about the fairness of AI-enabled systems in a participatory way. In short, by using a shared framework for identifying, documenting and justifying fairness considerations, and then using this deliberative exercise to structure an assurance case, research on AI fairness becomes reusable and generalizable for others in the ethical AI community and for sharing best practices for achieving fairness and equity in digital health and healthcare in particular.

This paper introduces a sociotechnical framework for assuring the fairness of AI-enabled systems throughout their lifecycle. The authors argue that current approaches to AI fairness are limited by their focus on statistical measures and their lack of consideration for the temporal aspect of fairness. The proposed framework, Trustworthy and Ethical Assurance (TEA), aims to address these limitations by providing a structured, argument-based approach to identifying, documenting, and justifying fairness considerations across the entire AI lifecycle.

The paper begins by highlighting the growing importance of fairness in AI, noting its presence in ethical guidelines and emerging regulations. It critiques the prevailing trend of reducing fairness to statistical measures, such as statistical parity and predictive equality, which can be gamed and may fail to capture the complexities of real-world fairness issues. The authors also point out the limitation of focusing solely on prediction-recipients, which neglects other stakeholders who may be affected by AI systems. Finally, they identify a lack of systematic and reproducible processes for ensuring AI fairness.

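For reference, the two statistical measures named above have simple operational definitions. The paper does not spell them out in code; the following is a minimal Python sketch, assuming binary predictions and a binary protected-group attribute (the function names and inputs are illustrative, not taken from the paper):

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between groups 0 and 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def predictive_equality_difference(y_true, y_pred, group):
    """Difference in false-positive rates between groups 0 and 1."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    fpr = lambda g: y_pred[(group == g) & (y_true == 0)].mean()
    return fpr(0) - fpr(1)
```

Statistical parity compares positive-prediction rates across groups, while predictive equality compares false-positive rates; values near zero indicate parity on that measure, which is precisely the narrow, gameable notion of fairness the authors want to move beyond.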
The TEA framework is presented as a solution to these issues. It uses an argument-based assurance case methodology: a structured argument, supported by evidence, is constructed to justify claims about the fairness of an AI-enabled system. A TEA case includes (see the sketch after this list):

  • A top-level claim, such as "the AI system is fair."
  • Intermediate claims that decompose the top-level claim into more specific and operationalized statements, such as "the AI system does not discriminate against marginalized groups."
  • Evidence that grounds the argument by justifying the intermediate claims. Evidence can include quantitative data, documentation, or expert opinions.

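Such a case is, in effect, a tree of claims whose leaves are grounded in evidence. A minimal sketch of one possible representation, assuming a simple claims-and-evidence schema (the class names and example strings are illustrative; the TEA tool's actual data model may differ):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    description: str   # e.g. "subgroup performance audit"
    reference: str     # link or document identifier

@dataclass
class Claim:
    statement: str
    sub_claims: List["Claim"] = field(default_factory=list)
    evidence: List[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim is supported if it has direct evidence or all sub-claims are supported."""
        if self.evidence:
            return True
        return bool(self.sub_claims) and all(c.is_supported() for c in self.sub_claims)

# Top-level claim decomposed into an operationalized intermediate claim
case = Claim(
    statement="The AI system is fair",
    sub_claims=[
        Claim(
            statement="The AI system does not discriminate against marginalized groups",
            evidence=[Evidence("Subgroup performance audit", "doc-042")],
        )
    ],
)
```

Representing the case this way makes explicit which intermediate claims still lack evidence, which is what structures the deliberative exercise the authors describe.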
The paper emphasizes that the TEA framework is not merely a tool for assessing fairness after a system is built, but rather a methodology for ensuring fairness throughout the AI lifecycle. To this end, the paper adopts a three-phase AI lifecycle model consisting of:

  • Project design, which includes project planning, problem formulation, data extraction or procurement, and data analysis.
  • Model development, which includes preprocessing and feature engineering, model selection and training, model testing and validation, and model documentation.
  • System deployment, which includes system implementation, user training, system use and monitoring, and model updating or deprovisioning.

To illustrate the application of the TEA framework, the authors present a case study of an AI-enabled clinical diagnostic support system (CDSS) designed to predict the risk of hypertension in patients with Type 2 diabetes. They describe the system's data preprocessing, model training, and performance metrics, including Accuracy and Cohen's Kappa. Four different machine learning algorithms were used to train separate models, which were then combined using a generalized linear model as an ensemble method, and feature importance was calculated to make the results more explainable.

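The paper does not reproduce the model code, but the reported setup (four separately trained models combined by a generalized linear model, evaluated with Accuracy and Cohen's Kappa) can be sketched with scikit-learn. The base learners and data below are placeholders standing in for the CDSS's actual algorithms and patient cohort:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder data standing in for the Type 2 diabetes cohort
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Four base learners combined by a logistic regression meta-model (a GLM)
ensemble = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("nb", GaussianNB()),
    ],
    final_estimator=LogisticRegression(),
)
ensemble.fit(X_train, y_train)

y_pred = ensemble.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print("Cohen's kappa:", cohen_kappa_score(y_test, y_pred))
```

In the TEA framing, these aggregate metrics would themselves become evidence items attached to intermediate claims, rather than standing in for fairness on their own.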
The paper then explores how the TEA framework can be applied to assure fairness across the lifecycle of the AI-enabled CDSS. The authors present a fairness considerations map, which highlights important intermediate claims to consider when developing a TEA argument to justify the fairness of an AI-enabled system. This map emphasizes that fairness should not be reduced to statistical measures and must also include sociopolitical considerations. Some examples of questions to consider include (see the checklist sketch after this list):

  • In the project design phase: Was a diverse team assembled for the project? Was the provenance of the data verified? Were biases in the data identified and mitigated?
  • In the model development phase: Who was involved in cleaning the data? How were benchmarks and measures set? Is the model appropriate?
  • In the deployment phase: Was the system appropriately integrated into the existing sociotechnical practices? Was the role of the user clearly outlined and communicated? Who is responsible for monitoring the system? How were update thresholds set?

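One way to operationalize such a map is as a phase-indexed checklist whose items must eventually be answered and evidenced in the assurance case. A minimal sketch, assuming a simple dictionary representation (the format is an assumption for illustration, not the paper's fairness considerations map itself):

```python
fairness_considerations = {
    "project_design": [
        "Was a diverse team assembled for the project?",
        "Was the provenance of the data verified?",
        "Were biases in the data identified and mitigated?",
    ],
    "model_development": [
        "Who was involved in cleaning the data?",
        "How were benchmarks and measures set?",
        "Is the model appropriate?",
    ],
    "system_deployment": [
        "Was the system appropriately integrated into existing sociotechnical practices?",
        "Was the role of the user clearly outlined and communicated?",
        "Who is responsible for monitoring the system?",
        "How were update thresholds set?",
    ],
}

def unaddressed(answers: dict) -> dict:
    """Return, per lifecycle phase, the questions not yet answered or evidenced."""
    return {
        phase: [q for q in questions if not answers.get(q)]
        for phase, questions in fairness_considerations.items()
    }
```

Tracking the map this way keeps the non-statistical, sociopolitical questions visible alongside the quantitative evidence at every phase of the lifecycle.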
The authors argue that these questions demonstrate that fairness is not merely a technical issue. They emphasize that a reductionist approach to fairness is insufficient to capture all the relevant considerations. They advocate for a more holistic approach, considering the various stakeholders and the potential impacts of AI-enabled systems on society.

The paper concludes by reiterating the limitations of current approaches to AI fairness and highlighting the need for sociotechnical and through-life approaches. The authors advocate for the use of the TEA framework to facilitate the development of fairer AI-enabled systems. They also provide an open-source tool to help stakeholders develop their own assurance cases.

Authors (9)
  1. Marten H. L. Kaas
  2. Christopher Burr
  3. Zoe Porter
  4. Philippa Ryan
  5. Michael Katell
  6. Nuala Polo
  7. Kalle Westerling
  8. Ibrahim Habli
  9. Berk Ozturk