From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space (2308.09437v3)

Published 18 Aug 2023 in cs.LG, cs.AI, cs.CV, and cs.CY

Abstract: Deep Neural Networks are prone to learning spurious correlations embedded in the training data, leading to potentially biased predictions. This poses risks when deploying these models for high-stakes decision-making, such as in medical applications. Current methods for post-hoc model correction either require input-level annotations, which are only possible for spatially localized biases, or augment the latent feature space, thereby hoping to enforce the right reasons. We present a novel method for model correction on the concept level that explicitly reduces model sensitivity towards biases via gradient penalization. When modeling biases via Concept Activation Vectors, we highlight the importance of choosing robust directions, as traditional regression-based approaches such as Support Vector Machines tend to result in diverging directions. We effectively mitigate biases in controlled and real-world settings on the ISIC, Bone Age, ImageNet and CelebA datasets using VGG, ResNet and EfficientNet architectures. Code is available at https://github.com/frederikpahde/rrclarc.
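
To make the abstract's idea concrete, here is a minimal PyTorch sketch of concept-level gradient penalization. It is an illustrative reconstruction, not the authors' implementation (see the linked repository for that): the toy backbone, the mean-difference CAV, the penalty weight `lam`, and all tensor shapes are placeholder assumptions. The penalty is the squared projection of the latent gradient of the true-class logit onto the bias direction, which drives the model's sensitivity along that concept toward zero.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical two-stage model: feature extractor plus classifier head.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
head = nn.Linear(128, 10)
opt = torch.optim.Adam(
    list(backbone.parameters()) + list(head.parameters()), lr=1e-4
)

# Dummy data standing in for artifact-labeled and clean samples.
x_bias = torch.randn(8, 3, 32, 32)   # samples containing the bias artifact
x_clean = torch.randn(8, 3, 32, 32)  # samples without it
x = torch.randn(8, 3, 32, 32)        # fine-tuning batch
y = torch.randint(0, 10, (8,))

# A simple mean-difference ("pattern") CAV for the bias concept; the paper
# argues such signal-based directions are more robust than SVM weight vectors.
with torch.no_grad():
    cav = backbone(x_bias).mean(0) - backbone(x_clean).mean(0)
    cav = cav / cav.norm()

feats = backbone(x)                   # latent activations a(x)
logits = head(feats)

# Gradient of the true-class logits w.r.t. the latent activations,
# kept in the graph so the penalty itself can be backpropagated.
score = logits.gather(1, y.unsqueeze(1)).sum()
grads = torch.autograd.grad(score, feats, create_graph=True)[0]

lam = 10.0                             # penalty weight (placeholder value)
penalty = (grads @ cav).pow(2).mean()  # squared projection onto the CAV
loss = nn.functional.cross_entropy(logits, y) + lam * penalty

opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch the cross-entropy term preserves task performance during fine-tuning, while the penalty term explicitly unlearns reliance on the bias direction in latent space.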

Authors (5)
  1. Maximilian Dreyer (15 papers)
  2. Frederik Pahde (13 papers)
  3. Christopher J. Anders (14 papers)
  4. Wojciech Samek (144 papers)
  5. Sebastian Lapuschkin (66 papers)
Citations (8)