Efficient Masked Face Recognition Method during the COVID-19 Pandemic (2105.03026v2)

Published 7 May 2021 in cs.CV

Abstract: The coronavirus disease (COVID-19) is an unparalleled crisis that has led to a huge number of casualties and security problems. To reduce the spread of the virus, people often wear masks to protect themselves, which makes face recognition very difficult because large parts of the face are hidden. A primary focus of researchers during the ongoing pandemic has been to propose rapid and efficient solutions to this problem. In this paper, we propose a reliable method based on occlusion removal and deep learning-based features to address the problem of masked face recognition. The first step is to remove the masked face region. Next, we apply three pre-trained deep Convolutional Neural Networks (CNNs), namely VGG-16, AlexNet, and ResNet-50, to extract deep features from the remaining regions (mostly the eyes and forehead). The Bag-of-Features paradigm is then applied to the feature maps of the last convolutional layer in order to quantize them and obtain a more compact representation than the fully connected layers of classical CNNs. Finally, a Multilayer Perceptron (MLP) is applied for classification. Experimental results on the Real-World-Masked-Face-Dataset show high recognition performance compared to other state-of-the-art methods.

Summary

  • The paper introduces an efficient masked face recognition method that removes occluded regions and extracts deep features from visible areas like eyes and forehead using pre-trained CNNs.
  • Instead of traditional fully connected layers, the method employs a Bag-of-Features paradigm on CNN feature maps for compact representation and classification via a Multilayer Perceptron.
  • Experiments on the Real-World-Masked-Face-Dataset demonstrate high accuracy, especially with VGG-16 and ResNet-50, offering a practical solution for applications like public security.

Efficient Masked Face Recognition Method during the COVID-19 Pandemic

Masked face recognition has emerged as a pressing challenge for biometric systems due to the COVID-19 pandemic. This paper presents an approach to the difficulties posed by faces occluded by masks, integrating occlusion removal with deep learning-based features to enhance recognition accuracy under these conditions.

The paper introduces an efficient recognition system that processes masked faces by discarding the occluded region and extracting features from the visible areas, predominantly the eyes and forehead. Deep features are extracted with three pre-trained deep Convolutional Neural Networks (CNNs), namely VGG-16, AlexNet, and ResNet-50. These networks, proven across a wide range of image-based tasks, yield features that remain robust despite the occlusion.
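
To make this stage concrete, the sketch below crops the visible upper part of a face image and runs it through a pre-trained backbone to obtain the last convolutional feature map. It assumes PyTorch/torchvision; the fixed upper-half crop is a simplified stand-in for the paper's occlusion-removal step rather than its exact procedure.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pre-trained backbone and keep only its convolutional feature extractor
# (VGG-16 shown; AlexNet and ResNet-50 are handled analogously).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
feature_extractor = vgg.features.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_upper_face_features(img_path: str) -> torch.Tensor:
    """Crop the visible upper face (eyes/forehead) and return the last
    convolutional feature map. The fixed 50% crop is only a placeholder
    for the paper's mask-removal step."""
    img = Image.open(img_path).convert("RGB")
    w, h = img.size
    upper = img.crop((0, 0, w, h // 2))   # keep forehead/eye region only
    x = preprocess(upper).unsqueeze(0)    # shape: (1, 3, 224, 224)
    with torch.no_grad():
        fmap = feature_extractor(x)       # VGG-16: (1, 512, 7, 7)
    return fmap
```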

Rather than relying on the traditional fully connected layers, which often involve extensive computational resources, the method employs a Bag-of-Features (BoF) paradigm applied to the feature maps derived from the last convolutional layers of CNNs. This approach provides a compact representation adaptable for real-time processing, significantly reducing the system's computational overhead.
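
As an illustration of this quantization step, the sketch below treats each spatial position of the last convolutional feature map as a local descriptor, assigns it to the nearest codeword of a learned codebook, and uses the normalized histogram of assignments as the compact face representation. The k-means codebook, its size, and the use of scikit-learn are assumptions made for illustration and may differ from the paper's exact BoF formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

def feature_map_to_descriptors(fmap: np.ndarray) -> np.ndarray:
    """Flatten a (C, H, W) feature map into H*W local descriptors of length C."""
    c, h, w = fmap.shape
    return fmap.reshape(c, h * w).T                      # shape: (H*W, C)

def build_codebook(descriptor_sets, num_codewords: int = 64) -> KMeans:
    """Learn a visual codebook from descriptors pooled over the training set."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=num_codewords, n_init=10, random_state=0).fit(all_desc)

def bof_histogram(fmap: np.ndarray, codebook: KMeans) -> np.ndarray:
    """Quantize each descriptor to its nearest codeword and return the
    normalized histogram used as the compact representation."""
    desc = feature_map_to_descriptors(fmap)
    assignments = codebook.predict(desc)
    hist = np.bincount(assignments, minlength=codebook.n_clusters).astype(np.float32)
    return hist / hist.sum()

# Usage with the previous snippet: bof_histogram(fmap.squeeze(0).numpy(), codebook)
```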

The classification of masked facial images is conducted using a Multilayer Perceptron (MLP), a simple yet effective neural network structure that offers competitive performance by leveraging the quantized features processed by the BoF framework. Experimental results on the Real-World-Masked-Face-Dataset highlight the system's proficiency, demonstrating improved recognition rates compared to conventional face recognition techniques that struggle with masked data.
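
The final stage can be sketched as a small MLP trained on the BoF histograms produced above; the hidden-layer size and training settings here are illustrative placeholders rather than the paper's reported hyperparameters.

```python
from sklearn.neural_network import MLPClassifier

def train_identity_mlp(X_train, y_train):
    """Train a small MLP on the quantized BoF representations.
    X_train: (N, num_codewords) BoF histograms; y_train: identity labels."""
    mlp = MLPClassifier(hidden_layer_sizes=(256,), activation="relu",
                        max_iter=500, random_state=0)
    mlp.fit(X_train, y_train)
    return mlp

# Prediction for a new masked face:
# identity = mlp.predict(bof_histogram(fmap.squeeze(0).numpy(), codebook)[None, :])
```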

Key findings from the experiments indicate that the proposed method achieves high accuracy, with VGG-16 and ResNet-50 consistently outperforming AlexNet across the evaluated settings of the dataset, and that the approach remains effective across the different mask configurations represented in the data.

This work underscores the non-negligible impact facial occlusion has on recognition systems and offers a practical solution scalable to diverse applications, including public security and access control systems during pandemics. The implications extend beyond immediate practical applications to potential developments in facial recognition systems, particularly concerning partial face recognition tasks involving predictable occlusions.

Future research may explore further enhancements by incorporating ensemble methods or additional pre-trained architectures to elevate performance metrics. Additionally, extending the methodology to support continuous surveillance systems and improving the robustness against diverse mask types and positions remains a promising avenue for investigation.