Deep k-NN for Noisy Labels (2004.12289v1)
Published 26 Apr 2020 in cs.LG and stat.ML
Abstract: Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify. In this paper, we provide an empirical study showing that a simple $k$-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than many recently proposed methods. We also provide new statistical guarantees for its efficacy.
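The filtering idea described in the abstract can be sketched as follows: run each training example through a preliminary model, then flag examples whose label disagrees with the majority label of their $k$ nearest neighbors in logit space. This is a minimal NumPy illustration of that idea; the function name `knn_filter`, the majority-vote rule, and the toy data are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def knn_filter(logits, labels, k=5):
    """Flag examples whose label disagrees with the majority label of
    their k nearest neighbors in logit space (illustrative sketch; the
    paper's exact filtering rule may differ)."""
    n = len(labels)
    # Pairwise Euclidean distances between logit vectors.
    d = np.linalg.norm(logits[:, None, :] - logits[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each point from its own neighborhood
    keep = np.zeros(n, dtype=bool)
    for i in range(n):
        nbrs = np.argsort(d[i])[:k]
        votes = np.bincount(labels[nbrs], minlength=logits.shape[1])
        # Keep the example if its label ties or wins the neighbor vote.
        keep[i] = votes[labels[i]] >= votes.max()
    return keep

# Toy demo: two well-separated clusters in a 2-D "logit space",
# with one deliberately flipped label.
rng = np.random.default_rng(0)
logits = np.vstack([rng.normal([5.0, 0.0], 0.1, (10, 2)),
                    rng.normal([0.0, 5.0], 0.1, (10, 2))])
labels = np.array([0] * 10 + [1] * 10)
labels[3] = 1  # inject a noisy label into the first cluster
keep = knn_filter(logits, labels, k=5)
print(np.where(~keep)[0])  # → [3]: only the mislabeled example is flagged
```

On the toy data, only the injected noisy example disagrees with its neighborhood and is filtered out; the cleaned set would then be used to train the final model.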