Fairness and bias correction in machine learning for depression prediction: results from four study populations (2211.05321v3)

Published 10 Nov 2022 in cs.LG and cs.CY

Abstract: A significant level of stigma and inequality exists in mental healthcare, especially in under-served populations. These inequalities are reflected in the data collected for scientific purposes. When not properly accounted for, ML models learned from data can reinforce these structural inequalities or biases. Here, we present a systematic study of bias in ML models designed to predict depression in four different case studies covering different countries and populations. We find that standard ML approaches regularly show biased behaviors. We also show that mitigation techniques, both standard and our own post-hoc method, can be effective in reducing the level of unfair bias. No single best ML model for depression prediction provides equality of outcomes. This emphasizes the importance of analyzing fairness during model selection and of transparent reporting about the impact of debiasing interventions. Finally, we provide practical recommendations to develop bias-aware ML models for depression risk prediction.
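The abstract reports auditing depression classifiers for group-level bias before and after mitigation. As a rough illustration only (not the paper's code or data), the sketch below trains a classifier on synthetic data and measures the gap in true positive rates between two groups defined by a protected attribute, a standard equal-opportunity check of the kind such audits rely on. All variable names, the synthetic data, and the choice of metric are assumptions for the example.

```python
# Illustrative sketch (not the paper's method): audit a depression classifier
# for group fairness via per-group recall (equal-opportunity gap).
# Data, features, and the protected attribute here are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))            # synthetic clinical/behavioral features
group = rng.integers(0, 2, size=n)     # hypothetical protected attribute (0/1)
# Simulated depression labels with a group-dependent shift to induce bias
y = (X[:, 0] + 0.8 * group + rng.normal(scale=1.0, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

def recall_for(mask):
    """True positive rate (recall) restricted to the rows selected by mask."""
    pos = (y_te == 1) & mask
    return ((y_hat == 1) & pos).sum() / max(pos.sum(), 1)

tpr_0 = recall_for(g_te == 0)
tpr_1 = recall_for(g_te == 1)
print(f"TPR group 0: {tpr_0:.3f}  TPR group 1: {tpr_1:.3f}")
print(f"Equal-opportunity gap: {abs(tpr_0 - tpr_1):.3f}")
```

A large gap would indicate that positive depression cases in one group are detected less often than in the other. Post-hoc mitigation strategies of the kind the paper compares typically intervene at this stage, for example by adjusting group-specific decision thresholds, rather than retraining the model.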

Authors (7)
  1. Vien Ngoc Dang
  2. Anna Cascarano
  3. Rosa H. Mulder
  4. Charlotte Cecil
  5. Maria A. Zuluaga
  6. Jerónimo Hernández-González
  7. Karim Lekadir