
How Self-Supervised Learning Can be Used for Fine-Grained Head Pose Estimation? (2108.04893v6)

Published 10 Aug 2021 in cs.CV and cs.LG

Abstract: The cost of head pose labeling is the main challenge in improving fine-grained Head Pose Estimation (HPE). Although Self-Supervised Learning (SSL) can be a solution to the lack of large amounts of labeled data, its efficacy for fine-grained HPE is not yet fully explored. This study assesses the use of SSL for fine-grained HPE in two scenarios: (1) using SSL for weight pre-training, and (2) leveraging auxiliary SSL losses alongside the HPE objective. We design a Hybrid Multi-Task Learning (HMTL) architecture based on a ResNet50 backbone in which both strategies are applied. Our experimental results reveal that the combination of both scenarios works best for HPE: together, they reduce the average error rate by up to 23.1% on the AFLW2000 benchmark and 14.2% on BIWI compared to the baseline. Moreover, we find that some SSL methods are better suited to transfer learning, while others are more effective as auxiliary tasks incorporated into supervised learning. Finally, we show that the proposed HMTL architecture reduces the average error regardless of the type of initial weights: random, ImageNet, or SSL pre-trained.
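
To make the second scenario concrete, below is a minimal PyTorch sketch of a hybrid multi-task setup in the spirit the abstract describes: a shared ResNet50 backbone feeding both a supervised head-pose head and an auxiliary self-supervised head, trained with a weighted joint loss. The specific SSL pretext task (4-way rotation prediction here), the regression head, the L1 pose loss, and the weight aux_weight are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class HMTLSketch(nn.Module):
    """Hypothetical hybrid multi-task model: a shared ResNet50 encoder
    with (1) a supervised head-pose head and (2) an auxiliary SSL head.
    The paper's exact heads, pretext tasks, and losses may differ."""
    def __init__(self, num_ssl_classes: int = 4):
        super().__init__()
        # weights=None gives random init; swap in ImageNet or SSL
        # pre-trained weights to cover the first scenario (pre-training).
        backbone = models.resnet50(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        feat_dim = backbone.fc.in_features  # 2048
        self.pose_head = nn.Linear(feat_dim, 3)               # yaw, pitch, roll
        self.ssl_head = nn.Linear(feat_dim, num_ssl_classes)  # e.g. rotation class

    def forward(self, x: torch.Tensor):
        f = self.encoder(x).flatten(1)        # (B, 2048)
        return self.pose_head(f), self.ssl_head(f)

def hmtl_loss(pose_pred, pose_gt, ssl_pred, ssl_gt, aux_weight: float = 0.1):
    """Joint objective: supervised pose regression plus a weighted
    auxiliary SSL loss (assumed form; the paper's losses may differ)."""
    pose_loss = nn.functional.l1_loss(pose_pred, pose_gt)     # MAE on Euler angles
    ssl_loss = nn.functional.cross_entropy(ssl_pred, ssl_gt)  # pretext task
    return pose_loss + aux_weight * ssl_loss
```

A typical training step would compute both heads on a batch, derive the SSL targets from the inputs themselves (e.g., which rotation was applied), and backpropagate the combined loss, so the shared encoder benefits from both signals.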
