Mitigating the Impact of Labeling Errors on Training via Rockafellian Relaxation (2405.20531v2)
Abstract: Labeling errors in datasets are common, arising in a variety of contexts such as human labeling, noisy labeling, and weak labeling (e.g., in image classification). Although neural networks (NNs) can tolerate modest amounts of these errors, their performance degrades substantially once error levels exceed a certain threshold. We propose a new, architecture-independent loss-reweighting methodology, the Rockafellian Relaxation Method (RRM), for neural network training. Experiments indicate that RRM enables neural network methods to achieve robust performance across classification tasks in computer vision and natural language processing (sentiment analysis). We find that RRM can mitigate the effects of dataset contamination stemming from both (heavy) labeling error and adversarial perturbation, demonstrating effectiveness across a variety of data domains and machine learning tasks.
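To make the "loss reweighting" idea concrete, the sketch below shows a generic per-sample reweighted training step in PyTorch. It is a minimal illustration only: the weighting heuristic (`temperature`-scaled softmax over negative per-example losses) and the function name `reweighted_training_step` are assumptions for exposition, not the paper's actual Rockafellian Relaxation update, which derives its weights from a relaxed formulation of the training problem.

```python
# Minimal sketch of per-sample loss reweighting during training (PyTorch).
# The weighting rule below (down-weighting high-loss examples) is a generic
# heuristic for illustration, NOT the paper's Rockafellian Relaxation update.
import torch
import torch.nn as nn

def reweighted_training_step(model, optimizer, inputs, labels, temperature=1.0):
    """One optimizer step in which each example's loss is scaled by a per-sample weight."""
    criterion = nn.CrossEntropyLoss(reduction="none")  # keep per-example losses
    optimizer.zero_grad()
    logits = model(inputs)
    per_example_loss = criterion(logits, labels)       # shape: (batch,)

    # Hypothetical weighting: examples with large loss (possibly mislabeled)
    # receive smaller weights; RRM instead obtains weights from its
    # Rockafellian relaxation rather than this softmax heuristic.
    with torch.no_grad():
        weights = torch.softmax(-per_example_loss / temperature, dim=0)
        weights = weights * weights.numel()            # keep the average weight near 1

    loss = (weights * per_example_loss).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In use, this step would simply replace a standard cross-entropy step inside the usual epoch/mini-batch loop; because it only changes how the batch loss is aggregated, it is independent of the network architecture, matching the architecture-independence claimed in the abstract.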