Knowledge Distillation for 6D Pose Estimation by Aligning Distributions of Local Predictions (2205.14971v2)

Published 30 May 2022 in cs.CV and cs.LG

Abstract: Knowledge distillation facilitates the training of a compact student network by using a deep teacher one. While this has achieved great success in many tasks, it remains completely unstudied for image-based 6D object pose estimation. In this work, we introduce the first knowledge distillation method driven by the 6D pose estimation task. To this end, we observe that most modern 6D pose estimation frameworks output local predictions, such as sparse 2D keypoints or dense representations, and that the compact student network typically struggles to predict such local quantities precisely. Therefore, instead of imposing prediction-to-prediction supervision from the teacher to the student, we propose to distill the teacher's *distribution* of local predictions into the student network, facilitating its training. Our experiments on several benchmarks show that our distillation method yields state-of-the-art results with different compact student models and for both keypoint-based and dense prediction-based architectures.
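To illustrate the core idea of aligning distributions rather than matching predictions one-to-one, the following is a minimal sketch, not the authors' implementation: it treats the teacher's and student's sets of local predictions (e.g., 2D keypoint offsets predicted at many image locations) as empirical distributions and aligns them with an RBF-kernel MMD loss as a stand-in alignment objective. The tensor shapes, variable names, and choice of MMD are illustrative assumptions; the paper's exact formulation may differ.

```python
# Sketch (not the paper's code): distribution-level distillation of local
# predictions. Instead of an element-wise loss between corresponding teacher
# and student outputs, align the two *sets* of predictions as distributions.
import torch

def rbf_kernel(x, y, sigma=1.0):
    # x: (N, D), y: (M, D) -> (N, M) Gaussian kernel matrix.
    dist2 = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-dist2 / (2.0 * sigma ** 2))

def distribution_alignment_loss(student_preds, teacher_preds, sigma=1.0):
    """Squared MMD between two sets of local predictions, each (N, D).
    Used here as a generic stand-in for the paper's alignment objective."""
    k_ss = rbf_kernel(student_preds, student_preds, sigma).mean()
    k_tt = rbf_kernel(teacher_preds, teacher_preds, sigma).mean()
    k_st = rbf_kernel(student_preds, teacher_preds, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Toy usage: 1024 local 2D predictions from each network (hypothetical shapes).
student_preds = torch.randn(1024, 2, requires_grad=True)
teacher_preds = torch.randn(1024, 2) + 0.5   # frozen teacher outputs
loss = distribution_alignment_loss(student_preds, teacher_preds.detach())
loss.backward()                              # gradients reach only the student
```

Because the loss compares sets rather than individual predictions, the student is not penalized for failing to reproduce each local teacher output exactly, which is the difficulty the abstract attributes to compact student networks.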

Authors (4)
  1. Shuxuan Guo (5 papers)
  2. Yinlin Hu (22 papers)
  3. Jose M. Alvarez (90 papers)
  4. Mathieu Salzmann (185 papers)
Citations (7)
