Informative Dropout for Robust Representation Learning: A Shape-bias Perspective (2008.04254v1)

Published 10 Aug 2020 in cs.LG, cs.CV, and stat.ML

Abstract: Convolutional Neural Networks (CNNs) are known to rely more on local texture rather than global shape when making decisions. Recent work also indicates a close relationship between CNN's texture-bias and its robustness against distribution shift, adversarial perturbation, random corruption, etc. In this work, we attempt at improving various kinds of robustness universally by alleviating CNN's texture bias. With inspiration from the human visual system, we propose a light-weight model-agnostic method, namely Informative Dropout (InfoDrop), to improve interpretability and reduce texture bias. Specifically, we discriminate texture from shape based on local self-information in an image, and adopt a Dropout-like algorithm to decorrelate the model output from the local texture. Through extensive experiments, we observe enhanced robustness under various scenarios (domain generalization, few-shot classification, image corruption, and adversarial perturbation). To the best of our knowledge, this work is one of the earliest attempts to improve different kinds of robustness in a unified model, shedding new light on the relationship between shape-bias and robustness, also on new approaches to trustworthy machine learning algorithms. Code is available at https://github.com/bfshi/InfoDrop.

Citations (104)

Summary

  • The paper introduces Informative Dropout (InfoDrop) to reduce texture bias and improve CNN robustness across varied scenarios.
  • It employs a Dropout-like algorithm that estimates the self-information of local regions and suppresses repetitive texture features, thereby emphasizing shape cues.
  • Experiments demonstrate significant gains in domain generalization and adversarial robustness, reinforcing the method’s practical utility.

Informative Dropout for Robust Representation Learning: A Shape-bias Perspective

The paper "Informative Dropout for Robust Representation Learning: A Shape-bias Perspective" offers a novel methodology to enhance robustness in convolutional neural networks (CNNs) by addressing their intrinsic texture bias. CNNs have demonstrated high proficiency in various visual tasks, but they show susceptibility to distribution shifts, adversarial perturbations, and random image corruptions. A key factor contributing to this vulnerability is the model's reliance on local texture rather than global shape features. To mitigate this bias, the authors introduce Informative Dropout (InfoDrop), a lightweight and model-agnostic approach aimed at improving both interpretability and robustness across diverse scenarios.

The methodology draws inspiration from the human visual system, which exhibits a bias toward shape and attends more heavily to regions with high self-information. InfoDrop mirrors this behavior: it discriminates texture from shape based on the self-information of local image regions, then uses a Dropout-like algorithm to decorrelate the model's output from local texture. Regions that merely repeat the patterns of their surroundings carry low self-information and are characteristic of texture, so InfoDrop selectively zeroes the output neurons whose receptive fields fall on such low-information regions, as sketched below.
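
To make the mechanism concrete, the following is a minimal PyTorch sketch of an InfoDrop-style mask. It is an illustration under stated assumptions rather than the authors' implementation (their code is at https://github.com/bfshi/InfoDrop): self-information is approximated via Gaussian similarity between each local patch and its neighbors, and a Boltzmann-style distribution converts low information into a high drop probability. The helper name infodrop_mask and all hyperparameter names and defaults are hypothetical.

```python
import torch
import torch.nn.functional as F

def infodrop_mask(x, patch_size=3, radius=1, sigma=0.1,
                  temperature=0.1, drop_rate=0.2):
    """Compute a (B, 1, H, W) mask that preferentially zeroes
    low-information (texture-like) spatial locations of x.

    x: input feature map of shape (B, C, H, W). All hyperparameters
    here are illustrative, not the paper's settings.
    """
    B, C, H, W = x.shape
    pad = patch_size // 2
    # Flatten the local patch around every spatial location.
    patches = F.unfold(x, patch_size, padding=pad)          # (B, C*k*k, H*W)
    patches = patches.transpose(1, 2).reshape(B, H, W, -1)  # (B, H, W, D)

    # Approximate p(patch) by its average Gaussian similarity to the
    # surrounding patches: repetitive texture -> high p -> low info.
    # torch.roll wraps at the borders, which is acceptable for a sketch.
    sim = torch.zeros(B, H, W, device=x.device)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = torch.roll(patches, shifts=(dy, dx), dims=(1, 2))
            dist2 = ((patches - shifted) ** 2).sum(-1)       # (B, H, W)
            sim = sim + torch.exp(-dist2 / (2 * sigma ** 2))
            count += 1
    info = -torch.log(sim / count + 1e-8)  # self-information I = -log p

    # Boltzmann-style drop probability: the lower the information,
    # the higher the chance the location is zeroed out.
    drop_p = torch.softmax((-info / temperature).reshape(B, -1), dim=1)
    drop_p = (drop_p * H * W * drop_rate).clamp(0.0, 1.0)
    drop_p = drop_p.reshape(B, 1, H, W)
    return (torch.rand_like(drop_p) >= drop_p).float()
```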

Experiments demonstrate InfoDrop's effectiveness across four settings: domain generalization, few-shot classification, robustness to image corruption, and adversarial robustness. Gains are most pronounced in domain generalization tasks whose target domains consist of sketch-like images, where shape is the dominant cue, indicating that InfoDrop succeeds in emphasizing shape features. InfoDrop also improves resilience to adversarial perturbations, particularly when combined with adversarial training, offering a complementary route to increasing model robustness.

Theoretically, the paper advances the understanding of the role of texture versus shape bias in neural network robustness. The results suggest a direct relationship between a model's ability to generalize across domains and its independence from local texture. This understanding can steer future research toward shape-oriented learning strategies, contributing to more trustworthy and robust machine learning algorithms.

From a practical perspective, the method is model-agnostic and can be applied to any CNN architecture without significant overhead, making it a drop-in addition to existing models. The authors suggest exploring the balance between texture and shape to find an optimal bias level for different tasks, inviting further investigation into how the interaction between these two cues can be harnessed to build models that more closely mimic the visual processing of human cognition.
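
As one illustration of this drop-in quality, a stride-1 convolution could be wrapped as follows. This sketch assumes the hypothetical infodrop_mask helper from the earlier snippet and is an assumed integration, not the authors' code.

```python
import torch
import torch.nn as nn

class InfoDropConv(nn.Module):
    """Stride-1 convolution masked by the InfoDrop-style helper above,
    so the input and the output share spatial dimensions."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        out = self.conv(x)
        if self.training:                 # regularize only during training
            out = out * infodrop_mask(x)  # zero texture-driven activations
        return out
```

Because the mask depends only on the layer's input, the wrapper adds no learned parameters, and it is skipped at evaluation time, so inference cost is unchanged.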

Overall, this paper offers insightful advances in tackling texture bias in CNNs, delivering improvements in robustness and interpretability and pointing to future avenues for research in reliable and secure machine learning.
