Hidden in Plain Sight: Undetectable Adversarial Bias Attacks on Vulnerable Patient Populations (2402.05713v3)

Published 8 Feb 2024 in cs.LG, cs.AI, and cs.CV

Abstract: The proliferation of AI in radiology has shed light on the risk of deep learning (DL) models exacerbating clinical biases towards vulnerable patient populations. While prior literature has focused on quantifying biases exhibited by trained DL models, demographically targeted adversarial bias attacks on DL models and their implications in the clinical environment remain an underexplored field of research in medical imaging. In this work, we demonstrate that demographically targeted label poisoning attacks can introduce undetectable underdiagnosis bias in DL models. Our results across multiple performance metrics and demographic groups such as sex, age, and their intersectional subgroups show that adversarial bias attacks are highly selective for bias in the targeted group, degrading that group's model performance without impacting overall model performance. Furthermore, our results indicate that adversarial bias attacks produce biased DL models that propagate prediction bias even when evaluated on external datasets.
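
The attack the abstract describes amounts to flipping a fraction of positive ("finding present") labels to negative within a targeted demographic subgroup before training, so the poisoned model learns extra false negatives (underdiagnosis) for that group while aggregate performance stays intact. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' code: the DataFrame columns (`sex`, `age`, `label`), the helper `poison_labels`, and the flip rate are all hypothetical.

```python
import numpy as np
import pandas as pd

def poison_labels(df, target_mask, flip_rate, label_col="label", seed=0):
    """Flip a fraction of positive labels to negative within a targeted subgroup.

    Underdiagnosis bias corresponds to extra false negatives, so the
    attack only flips positive ground-truth labels, and only for rows
    belonging to the targeted demographic group. (Illustrative sketch.)
    """
    rng = np.random.default_rng(seed)
    poisoned = df.copy()
    # Candidate rows: in the targeted group AND positively labeled.
    candidates = poisoned.index[target_mask & (poisoned[label_col] == 1)]
    n_flip = int(flip_rate * len(candidates))
    flip_idx = rng.choice(candidates, size=n_flip, replace=False)
    poisoned.loc[flip_idx, label_col] = 0
    return poisoned

# Hypothetical usage: poison 50% of positive labels for female patients under 40,
# assuming a metadata file with `sex`, `age`, and binary `label` columns.
# df = pd.read_csv("train_labels.csv")
# mask = (df["sex"] == "F") & (df["age"] < 40)
# poisoned_df = poison_labels(df, mask, flip_rate=0.5)
```

The selectivity claim can then be checked by training on the poisoned labels and comparing subgroup-level metrics (e.g., AUROC or false-negative rate for the targeted group) against overall metrics: a successful attack degrades the former while leaving the latter essentially unchanged.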

Authors (6)
  1. Pranav Kulkarni (13 papers)
  2. Andrew Chan (8 papers)
  3. Nithya Navarathna (1 paper)
  4. Skylar Chan (2 papers)
  5. Paul H. Yi (16 papers)
  6. Vishwa S. Parekh (25 papers)