
Unlimited Knowledge Distillation for Action Recognition in the Dark (2308.09327v1)

Published 18 Aug 2023 in cs.CV

Abstract: Dark videos often lose essential information, so the knowledge learned by networks is insufficient for accurately recognizing actions. Existing knowledge assembling methods require massive GPU memory to distill the knowledge from multiple teacher models into a student model. In action recognition, this drawback becomes serious because of the heavy computation required by video processing, making these approaches infeasible under limited computational resources. To address this issue, we propose unlimited knowledge distillation (UKD) in this paper. Compared with existing knowledge assembling methods, our UKD can effectively assemble different knowledge without introducing high GPU memory consumption, so the number of teacher models used for distillation is unlimited. With our UKD, the network's learned knowledge can be remarkably enriched. Our experiments show that a single-stream network distilled with our UKD even surpasses a two-stream network. Extensive experiments are conducted on the ARID dataset.
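As context for the memory issue the abstract raises, below is a minimal, hypothetical sketch of multi-teacher knowledge distillation in which the student learns from pre-computed (offline) teacher logits rather than from teacher networks held in GPU memory. This illustrates the general idea of assembling knowledge from an arbitrary number of teachers with constant GPU memory; it is an assumption for illustration, not the paper's exact UKD procedure, and all names here (e.g. `distillation_loss`) are invented for the example.

```python
# Hypothetical sketch: distill from an arbitrary number of teachers using their
# pre-computed logits. Only the student network occupies GPU memory; teacher
# predictions are loaded from disk, so adding teachers does not raise memory use.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, labels, T=4.0, alpha=0.5):
    """Combine cross-entropy on ground-truth labels with KL divergence against
    the averaged soft targets of any number of teachers."""
    # Average the teachers' softened distributions (one simple way to assemble knowledge).
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=1) for t in teacher_logits_list]
    ).mean(dim=0)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        teacher_probs,
        reduction="batchmean",
    ) * (T * T)  # standard temperature-squared scaling
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In a training loop, the teacher logits for each video clip would be exported once beforehand and loaded alongside the clip, so the distillation cost per added teacher is only the storage and loading of its logits, not an extra forward pass on the GPU.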

Authors (7)
  1. Ruibing Jin (5 papers)
  2. Guosheng Lin (158 papers)
  3. Min Wu (201 papers)
  4. Jie Lin (142 papers)
  5. Zhengguo Li (41 papers)
  6. Xiaoli Li (120 papers)
  7. Zhenghua Chen (51 papers)
Citations (1)
