
Self-Correction for Human Parsing (1910.09777v1)

Published 22 Oct 2019 in cs.CV, cs.LG, and eess.IV

Abstract: Labeling pixel-level masks for fine-grained semantic segmentation tasks, e.g. human parsing, remains a challenging task. The ambiguous boundary between different semantic parts and those categories with similar appearance usually are confusing, leading to unexpected noises in ground truth masks. To tackle the problem of learning with label noises, this work introduces a purification strategy, called Self-Correction for Human Parsing (SCHP), to progressively promote the reliability of the supervised labels as well as the learned models. In particular, starting from a model trained with inaccurate annotations as initialization, we design a cyclically learning scheduler to infer more reliable pseudo-masks by iteratively aggregating the current learned model with the former optimal one in an online manner. Besides, those correspondingly corrected labels can in turn to further boost the model performance. In this way, the models and the labels will reciprocally become more robust and accurate during the self-correction learning cycles. Benefiting from the superiority of SCHP, we achieve the best performance on two popular single-person human parsing benchmarks, including LIP and Pascal-Person-Part datasets. Our overall system ranks 1st in CVPR2019 LIP Challenge. Code is available at https://github.com/PeikeLi/Self-Correction-Human-Parsing.

Citations (296)

Summary

  • The paper presents SCHP, a self-correction method that cyclically refines both labels and models to enhance human parsing accuracy.
  • It employs a cyclic learning scheduler and an augmented A-CE2P architecture, achieving a 6.2-point mIoU improvement on the LIP benchmark.
  • The approach offers robust performance without extra computational cost, paving the way for applications in multi-person and video parsing.

Self-Correction for Human Parsing: An Examination

The paper "Self-Correction for Human Parsing" by Peike Li et al. addresses label noise in fine-grained semantic segmentation, focusing on human parsing: assigning each pixel of an image to a body-part or clothing category, such as arms, legs, and garments, a task critical for applications ranging from image editing to virtual reality. The work proposes a self-correction strategy, Self-Correction for Human Parsing (SCHP), which improves the accuracy of both the supervised labels and the trained model by refining noisy annotations during training.

Core Contributions

The SCHP strategy revolves around a cyclic learning scheduler that iteratively refines both the labels and the model. Training begins with a model fitted to potentially inaccurate annotations; in each subsequent cycle, the current model is aggregated with the previous optimal one to generate more reliable pseudo-labels, and these corrected labels in turn improve the model's learning, forming a robust, self-correcting loop. Importantly, this process requires no additional computational overhead, as it is integrated into the existing training schedule.
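The online aggregation in the cycle can be illustrated as a running average of the current model with the accumulated one. The sketch below is a simplification: it operates on plain lists of floats standing in for network parameters (the same update would apply to pseudo-label probability maps), and the `aggregate` helper and its weighting are illustrative rather than the authors' exact implementation.

```python
def aggregate(prev_avg, current, cycle):
    """Running average after `cycle` self-correction cycles:
    new = cycle/(cycle+1) * prev_avg + 1/(cycle+1) * current."""
    w = 1.0 / (cycle + 1)
    return [(1 - w) * a + w * c for a, c in zip(prev_avg, current)]

# Example: repeated cycles pull the average toward the current model.
avg = [0.0, 0.0]                      # cycle-0 model
avg = aggregate(avg, [3.0, 6.0], 1)   # -> [1.5, 3.0]
avg = aggregate(avg, [3.0, 6.0], 2)   # -> approximately [2.0, 4.0]
```

Because the weight `1/(cycle+1)` shrinks over cycles, early (noisier) models are progressively down-weighted while the aggregate stabilizes.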

The authors employ Augmented Context Embedding with Edge Perceiving (A-CE2P), a network architecture that extends the CE2P framework by jointly exploiting boundary and parsing information to resolve ambiguous boundaries between semantic parts. Although the paper's claimed novelty does not lie in this architecture itself, pairing it with the mutual model-and-label promotion strategy yields notable gains in robustness and accuracy.

Numerical Results and Performance Evaluation

When evaluated on prominent single-person parsing benchmarks such as the LIP and Pascal-Person-Part datasets, SCHP demonstrates superior performance: it improves mean Intersection over Union (mIoU) on the LIP benchmark by 6.2 points over the next-best method. The overall system also secured first place in the CVPR2019 LIP Challenge, underscoring the method's practical efficacy. On Pascal-Person-Part, SCHP likewise outperforms existing methods, indicating broad adaptability and effectiveness across datasets.
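For reference, the mIoU metric used above is the per-class intersection-over-union averaged over classes. A minimal sketch on flattened label lists (real evaluation runs over full label maps, and conventions differ on how absent classes are handled; here classes with an empty union are skipped):

```python
def mean_iou(pred, gt, num_classes):
    """Mean Intersection-over-Union across classes, skipping classes
    absent from both prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Two classes: class 0 gets IoU 2/3, class 1 gets IoU 1/2.
score = mean_iou([0, 1, 1, 0], [0, 1, 0, 0], num_classes=2)
```

A 6.2-point gain on this metric is substantial, since mIoU averages over all 20 LIP categories, including small, hard parts.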

Implications and Future Directions

Practically, the self-correction method fosters more reliable human parsing models capable of resisting the typical inaccuracies found in annotations due to challenging visual cues or human oversight. Theoretically, this work proposes a significant shift towards dynamic, interactive training methodologies that refine not only the model but also the ground truth it relies upon.

Moving forward, the authors suggest potential extensions of this method to multi-person or video parsing tasks, where the inherent complexities and interconnections pose further opportunities for improved self-correction methodologies. These developments can significantly aid in evolving AI's understanding in dynamic, multi-agent environments.

The paper offers an insightful contribution to computer vision, particularly in tackling label noise through a mutually reinforcing model-and-label refinement process. It stands as a valuable reference for researchers and practitioners aiming to improve model reliability in tasks requiring detailed semantic segmentation.
