W2WNet: a two-module probabilistic Convolutional Neural Network with embedded data cleansing functionality
Abstract: Convolutional Neural Networks (CNNs) are typically assumed to be trained on high-quality annotated datasets. Nonetheless, in many real-world scenarios such quality is very hard to obtain, and datasets may suffer from various kinds of image degradation and mislabelling. This degrades the performance of standard CNNs, both during training and at inference time. To address this issue we propose Wise2WipedNet (W2WNet), a new two-module Convolutional Neural Network in which a Wise module exploits Bayesian inference to identify and discard spurious images during training, and a Wiped module handles the final classification while reporting the prediction confidence at inference time. The effectiveness of our solution is demonstrated on a number of public benchmarks covering different image classification tasks, as well as on a real-world case study in histological image analysis. Overall, our experiments show that W2WNet can identify image degradation and mislabelling issues both at training and at inference time, with a positive impact on the final classification accuracy.
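The abstract does not specify how the Wise module's Bayesian inference flags spurious images, but a common recipe in this line of work (e.g. Monte Carlo dropout) is to run several stochastic forward passes per image and treat high predictive entropy as a sign of degradation or mislabelling. The sketch below illustrates that idea only; the array shapes, the entropy criterion, and the threshold are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Predictive entropy of the mean softmax over T stochastic passes.

    mc_probs: array of shape (T, N, C) -- T Monte Carlo forward passes
    (e.g. with dropout active), N samples, C classes. Shapes are
    hypothetical; the paper's actual inference scheme may differ.
    """
    mean_probs = mc_probs.mean(axis=0)  # (N, C) average over passes
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)

def flag_spurious(mc_probs, threshold):
    """Boolean mask of samples whose predictive entropy exceeds threshold."""
    return predictive_entropy(mc_probs) > threshold

# Toy data: 4 passes, 2 samples, 2 classes.
# Sample 0: all passes agree (confident) -> low entropy.
# Sample 1: passes hover near uniform -> high entropy.
mc_probs = np.array([
    [[0.95, 0.05], [0.55, 0.45]],
    [[0.95, 0.05], [0.45, 0.55]],
    [[0.95, 0.05], [0.50, 0.50]],
    [[0.95, 0.05], [0.50, 0.50]],
])

mask = flag_spurious(mc_probs, threshold=0.4)
print(mask)  # only the high-uncertainty sample is flagged for removal
```

In a training pipeline along the lines the abstract describes, images flagged this way would be excluded from the set used to train the final ("Wiped") classifier; the threshold would need to be tuned per dataset.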