
Teaching Where to Look: Attention Similarity Knowledge Distillation for Low Resolution Face Recognition (2209.14498v1)

Published 29 Sep 2022 in cs.CV

Abstract: Deep learning has achieved outstanding performance on face recognition benchmarks, but performance degrades significantly for low resolution (LR) images. We propose an attention similarity knowledge distillation approach, which transfers attention maps obtained from a high resolution (HR) network as a teacher into an LR network as a student to boost LR recognition performance. Inspired by humans' ability to approximate an object's region in an LR image based on prior knowledge obtained from HR images, we designed the knowledge distillation loss using cosine similarity to make the student network's attention resemble the teacher network's attention. Experiments on various LR face-related benchmarks confirmed that the proposed method generally improves recognition performance in LR settings, outperforming state-of-the-art results by simply transferring well-constructed attention maps. The code and pretrained models are publicly available at https://github.com/gist-ailab/teaching-where-to-look.
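The cosine-similarity distillation loss described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the attention maps are represented as hypothetical flattened vectors, and the function names (`cosine_similarity`, `attention_distill_loss`) are illustrative.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two flattened attention maps.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def attention_distill_loss(teacher_maps, student_maps):
    # Loss = 1 - mean cosine similarity across attention maps.
    # Minimizing it pushes the student's (LR) attention toward
    # the teacher's (HR) attention, i.e. "teaching where to look".
    sims = [cosine_similarity(t, s)
            for t, s in zip(teacher_maps, student_maps)]
    return 1.0 - sum(sims) / len(sims)

# Identical attention maps give a loss of ~0; orthogonal maps give 1.
teacher = [[0.2, 0.5, 0.3], [0.1, 0.8, 0.1]]
student = [[0.2, 0.5, 0.3], [0.1, 0.8, 0.1]]
print(attention_distill_loss(teacher, student))
```

In the paper's setting this loss would be added to the standard recognition loss during student training; the sketch above only shows the similarity term itself.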

Authors (5)
  1. Sungho Shin (52 papers)
  2. Joosoon Lee (6 papers)
  3. Junseok Lee (30 papers)
  4. Yeonguk Yu (8 papers)
  5. Kyoobin Lee (19 papers)
Citations (26)
