One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations (2307.05128v1)
Abstract: One weakness of machine-learning algorithms is the need to train the models for each new task. This presents a specific challenge for biometric recognition due to the dynamic nature of databases and, in some instances, the reliance on subject collaboration for data collection. In this paper, we investigate the behavior of deep representations in widely used CNN models under extreme data scarcity for One-Shot periocular recognition, a biometric recognition task. We analyze the outputs of CNN layers as identity-representing feature vectors, examine the impact of Domain Adaptation on the network layers' output for unseen data, and evaluate the method's robustness with respect to data normalization and the generalization of the best-performing layer. Using out-of-the-box CNNs trained for the ImageNet Recognition Challenge together with standard computer vision algorithms, we improve on state-of-the-art results obtained with networks trained on biometric datasets of millions of images and fine-tuned for the target periocular dataset. For example, on the Cross-Eyed dataset, we reduce the periocular EER by 67% and 79% (from 1.70% and 3.41% to 0.56% and 0.71%) in the Closed-World and Open-World protocols, respectively. We also demonstrate that traditional algorithms like SIFT can outperform CNNs when data is limited or when the network has not been trained on the test classes, as in the Open-World mode. SIFT alone reduced the EER by 64% and 71.6% (from 1.7% and 3.41% to 0.6% and 0.97%) for Cross-Eyed in the Closed-World and Open-World protocols, respectively, and by 4.6% (from 3.94% to 3.76%) on the PolyU database in the Open-World, single-biometric case.
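The two matching strategies the abstract compares can be sketched as follows: tapping an intermediate layer of an ImageNet-pretrained CNN as an identity-representing feature vector, and counting ratio-filtered SIFT keypoint matches. This is a minimal illustration assuming PyTorch/torchvision and OpenCV are available; the backbone (ResNet-50), the tapped layer (`layer3`), and the 0.75 Lowe-ratio threshold are placeholder assumptions, not the paper's reported best configuration, since identifying the best-performing layer per network is precisely what the paper evaluates.

```python
# Minimal sketch: off-the-shelf CNN features vs. SIFT for one-shot
# periocular verification. Backbone, layer choice, and thresholds are
# illustrative assumptions, not the paper's exact pipeline.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

# ImageNet-pretrained ResNet-50 used as a frozen feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.eval()

activations = {}
def hook(_module, _inputs, output):
    activations["feat"] = output.detach()

# Tap an intermediate layer; "layer3" is only a placeholder, as which
# layer generalizes best is what the paper studies empirically.
backbone.layer3.register_forward_hook(hook)

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def cnn_descriptor(bgr_image: np.ndarray) -> np.ndarray:
    """Flatten one intermediate layer's output into an identity vector."""
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        backbone(preprocess(rgb).unsqueeze(0))
    vec = activations["feat"].flatten().numpy()
    return vec / (np.linalg.norm(vec) + 1e-12)  # L2-normalize

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Vectors are already unit-length, so the dot product is the cosine.
    return float(np.dot(a, b))

def sift_match_score(img1: np.ndarray, img2: np.ndarray) -> int:
    """Count Lowe-ratio-filtered SIFT matches as a similarity score."""
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    _, d2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)
    if d1 is None or d2 is None:
        return 0
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance)
```

In verification mode, one would threshold `cosine_similarity` for the CNN branch and the match count for SIFT; comparing (or fusing) the two scores mirrors the CNN-versus-SIFT comparison the abstract reports.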