Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition (2207.10299v2)

Published 21 Jul 2022 in cs.CV

Abstract: Noisy label Facial Expression Recognition (FER) is more challenging than traditional noisy label classification tasks due to the inter-class similarity and the annotation ambiguity. Recent works mainly tackle this problem by filtering out large-loss samples. In this paper, we explore dealing with noisy labels from a new feature-learning perspective. We find that FER models remember noisy samples by focusing on a part of the features that can be considered related to the noisy labels instead of learning from the whole features that lead to the latent truth. Inspired by that, we propose a novel Erasing Attention Consistency (EAC) method to suppress the noisy samples during the training process automatically. Specifically, we first utilize the flip semantic consistency of facial images to design an imbalanced framework. We then randomly erase input images and use flip attention consistency to prevent the model from focusing on a part of the features. EAC significantly outperforms state-of-the-art noisy label FER methods and generalizes well to other tasks with a large number of classes like CIFAR100 and Tiny-ImageNet. The code is available at https://github.com/zyh-uaiaaaa/Erasing-Attention-Consistency.

Authors (4)
  1. Yuhang Zhang (64 papers)
  2. Chengrui Wang (11 papers)
  3. Xu Ling (3 papers)
  4. Weihong Deng (71 papers)
Citations (118)

Summary

Overview of "Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition"

The paper "Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition" presents a novel approach to tackle the challenge of noisy label learning in the domain of Facial Expression Recognition (FER). The authors propose the Erasing Attention Consistency (EAC) method, which leverages feature-learning strategies to mitigate the impact of label noise without the need for explicit noise rate estimation.

In contrast to conventional methods like sample selection and label ensembling, which rely on identifying and suppressing noisy samples based on loss values, EAC targets the feature-learning phase. The paper argues that existing FER models tend to memorize noisy samples by focusing on partial features indicative of noisy labels, thereby overlooking the complete feature set that corresponds to the true labels. By addressing this issue, EAC seeks to enhance the model’s robustness in the presence of label noise.

The cornerstone of the EAC method is attention consistency, built on the flip semantic consistency of facial images: horizontally flipping a face leaves its expression unchanged. The paper combines random erasing of input images with a consistency constraint between the attention maps of the original and flipped images. This encourages the model to draw on the comprehensive feature set of every training sample, discouraging overfitting to noisy samples.
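In symbols, a minimal formulation of this idea reads as follows; the notation M(·) for the class activation attention map and Flip for a horizontal flip are assumptions of this sketch, not taken verbatim from the paper:

```latex
% Flip attention consistency: since flipping a face preserves its
% expression, the attention map of the flipped image, flipped back,
% should match the attention map of the original image.
\mathrm{Flip}\big(M(\mathrm{Flip}(x))\big) \approx M(x),
\qquad
\mathcal{L}_{c} = \big\lVert M(x) - \mathrm{Flip}\big(M(\mathrm{Flip}(x))\big) \big\rVert_2^2
```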

EAC operates under an imbalanced framework: the classification loss is computed only on the original (non-flipped) images, while the consistency loss is enforced between the attention maps of the original and flipped images. Because the erased regions change at every iteration, the model cannot keep the consistency loss small by memorizing a fixed set of partial features; it is compelled to incorporate the entire feature set.
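A minimal PyTorch-style sketch of one such training step is given below. It assumes a ResNet-style `backbone` returning spatial features, a linear classifier `fc` whose weights yield CAM-style attention maps, and a weighting factor `lam`; these names and details (the erasing schedule, the value of `lam`) are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of one EAC-style training step (PyTorch).
import torch
import torch.nn.functional as F
from torchvision import transforms

erase = transforms.RandomErasing(p=1.0)  # erase a random region in every image

def attention_maps(features, fc_weight):
    """CAM-style maps: one spatial map per class.
    features:  (B, C, H, W) spatial features from the backbone
    fc_weight: (num_classes, C) weights of the linear classifier
    returns:   (B, num_classes, H, W)
    """
    return torch.einsum('bchw,kc->bkhw', features, fc_weight)

def eac_step(backbone, fc, images, labels, lam=5.0):
    # lam is an assumed weighting hyperparameter for this sketch.
    erased = torch.stack([erase(img) for img in images])  # random erasing
    flipped = torch.flip(erased, dims=[3])                # horizontal flip

    feat = backbone(erased)                               # (B, C, H, W)
    feat_flip = backbone(flipped)

    # Classification loss only on the (erased) non-flipped branch:
    # the "imbalanced" part of the framework.
    logits = fc(F.adaptive_avg_pool2d(feat, 1).flatten(1))
    cls_loss = F.cross_entropy(logits, labels)

    # Flip attention consistency: flip the maps of the flipped branch
    # back, then penalize their difference from the original maps.
    maps = attention_maps(feat, fc.weight)
    maps_flip = torch.flip(attention_maps(feat_flip, fc.weight), dims=[3])
    consistency_loss = F.mse_loss(maps, maps_flip)

    return cls_loss + lam * consistency_loss
```

In this sketch the erased region is resampled at every call, so the consistency term cannot be satisfied by memorizing any fixed subset of features, mirroring the dynamic behavior described above.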

The authors claim that EAC significantly outperforms state-of-the-art techniques in noisy label FER on datasets such as RAF-DB, FERPlus, and AffectNet. The method also demonstrates superior generalization capabilities on datasets with a large number of classes, like CIFAR100 and Tiny-ImageNet, underscoring its versatility beyond FER.

Key Contributions

  1. Feature-Learning Perspective: The paper shifts the focus of noisy label handling from conventional sample selection to feature learning, removing the need to estimate the noise rate.
  2. Erasing Attention Consistency: EAC introduces a novel approach that automatically mitigates the memorization of noisy labels by enforcing an imbalanced framework utilizing flip attention consistency.
  3. Extensive Evaluation: The authors highlight EAC’s efficacy through rigorous testing across various levels of noise on multiple FER benchmarks, along with its successful application to broader classification tasks with many classes.

Implications and Future Directions

Practically, the EAC method is poised to improve the robustness and reliability of FER systems in real-world applications, where noisy labels are inevitable. Furthermore, its applicability to large-scale datasets beyond FER suggests potential for broader adoption in other computer vision tasks impacted by label noise.

Theoretically, this work presents a compelling argument for reconsidering the focus of noisy label handling strategies, advocating for attention mechanisms and feature learning as fundamental components. The use of attention consistency to regularize training under label noise is a key insight that could inspire the refinement of other noise-robust learning algorithms.

Future developments in AI could build upon these findings by expanding on the types of transformations used for consistency checks or improving the computational efficiency of attention consistency mechanisms. Additionally, exploring how these methods scale with deeper, more complex models or adapt to unsupervised and semi-supervised learning scenarios may chart exciting paths for further inquiry.

This paper establishes a novel understanding of how to harness feature learning to improve performance amidst noisy data, thereby contributing significantly to the ongoing discourse in robust machine learning techniques.