Low-Resolution Face Recognition via Adaptable Instance-Relation Distillation (2409.02049v1)

Published 3 Sep 2024 in cs.CV, cs.AI, cs.LG, and cs.MM

Abstract: Low-resolution face recognition is a challenging task due to the absence of informative facial details. Recent approaches based on knowledge distillation have shown that high-resolution clues can effectively guide low-resolution face recognition via proper knowledge transfer. However, due to the distribution difference between training and testing faces, the learned models often suffer from poor adaptability. To address this, we split the knowledge transfer process into distillation and adaptation steps, and propose an adaptable instance-relation distillation approach to facilitate low-resolution face recognition. In this approach, the student distills knowledge from the high-resolution teacher at both the instance level and the relation level, providing sufficient cross-resolution knowledge transfer. The learned student can then be adapted to recognize low-resolution faces via adaptive batch normalization at inference. In this manner, the capability of recovering the missing details of familiar low-resolution faces is effectively enhanced, leading to better knowledge transfer. Extensive experiments on low-resolution face recognition clearly demonstrate the effectiveness and adaptability of our approach.
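
To make the two-level transfer concrete, here is a minimal PyTorch sketch of what an instance-plus-relation distillation loss could look like. It is an illustration assembled from the abstract alone, not the authors' implementation: the function name, the cosine-distance instance term, the pairwise-similarity relation term, and the rel_weight balance are all assumptions.

```python
import torch
import torch.nn.functional as F

def instance_relation_distillation(student_emb: torch.Tensor,
                                   teacher_emb: torch.Tensor,
                                   rel_weight: float = 1.0) -> torch.Tensor:
    """Hypothetical sketch of a combined instance- and relation-level
    distillation loss.

    student_emb: (B, D) student embeddings of low-resolution faces.
    teacher_emb: (B, D) frozen-teacher embeddings of the matching
                 high-resolution faces.
    """
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)

    # Instance level: align each student embedding with its
    # high-resolution teacher counterpart (cosine distance).
    loss_instance = (1.0 - (s * t).sum(dim=1)).mean()

    # Relation level: match the pairwise similarity structure of the
    # batch between the student and teacher embedding spaces.
    loss_relation = F.mse_loss(s @ s.t(), t @ t.t())

    return loss_instance + rel_weight * loss_relation
```

Normalizing the embeddings first makes both terms scale-invariant, which matches common practice in face recognition, where cosine similarity is the usual comparison metric.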

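The adaptation step can likewise be sketched as AdaBN-style statistic re-estimation: before inference, the student's batch-normalization statistics are recomputed on unlabeled low-resolution faces from the target distribution. The snippet below is a hypothetical rendering of that idea, not the paper's exact procedure; adapt_bn_statistics and lr_face_loader are invented names.

```python
import torch
from torch import nn

@torch.no_grad()
def adapt_bn_statistics(model: nn.Module, lr_face_loader) -> None:
    """Re-estimate BatchNorm running statistics on unlabeled
    low-resolution faces before inference (AdaBN-style sketch)."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.reset_running_stats()   # start statistics from scratch
            m.momentum = None         # use a cumulative moving average

    model.train()                     # BN updates its stats in train mode
    for images in lr_face_loader:     # loader yields batches of face images
        model(images)                 # forward passes only; no labels
    model.eval()                      # freeze the adapted statistics
```

Because only the running statistics change, the adaptation needs no labels and no gradient updates on the target domain.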
