
Self-Supervised Features Improve Open-World Learning (2102.07848v2)

Published 15 Feb 2021 in cs.CV and cs.LG

Abstract: This paper identifies flaws in existing open-world learning approaches and attempts to provide a complete picture in the form of True Open-World Learning. We accomplish this by proposing a comprehensive, generalizable open-world learning protocol capable of evaluating the various components of open-world learning in an operational setting. We argue that in true open-world learning, the underlying feature representation should be learned in a self-supervised manner. Under this self-supervised feature representation, we introduce the problem of detecting unknowns as samples belonging to the Out-of-Label space. We differentiate between Out-of-Label space detection and conventional Out-of-Distribution detection depending on whether the unknowns being detected belong to the native world (the same world as the feature representation) or to a new world, respectively. Our unifying open-world learning framework combines three research dimensions that have typically been explored independently: Incremental Learning, Out-of-Distribution detection, and Open-World Learning. Starting from a self-supervised feature space, an open-world learner can adapt and specialize its feature space to the classes in each incremental phase, and hence perform better without incurring any significant overhead, as demonstrated by our experimental results. The incremental learning component of our pipeline sets a new state of the art on the established ImageNet-100 protocol. We also demonstrate the adaptability of our approach by showing how it can work as a plug-in with any self-supervised feature representation method.
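The abstract's core idea of detecting unknowns as Out-of-Label samples in a fixed self-supervised feature space can be illustrated with a minimal sketch. This is not the paper's actual method; it assumes a simple nearest-centroid rule over hypothetical pre-extracted feature vectors, flagging a sample as unknown when it lies far from every known-class centroid.

```python
import numpy as np

def fit_class_centroids(features, labels):
    """Compute one centroid per known class in the (frozen) self-supervised feature space."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def is_out_of_label(sample, centroids, threshold):
    """Flag a sample as Out-of-Label if it is far from every known-class centroid."""
    dists = [np.linalg.norm(sample - mu) for mu in centroids.values()]
    return min(dists) > threshold

# Toy stand-in for self-supervised features of one known class near the origin.
rng = np.random.default_rng(0)
known_feats = rng.normal(0.0, 0.1, size=(50, 8))
known_labels = np.zeros(50, dtype=int)
centroids = fit_class_centroids(known_feats, known_labels)

in_label = rng.normal(0.0, 0.1, size=8)       # resembles the known class
unknown = rng.normal(5.0, 0.1, size=8)        # far from anything seen before
print(is_out_of_label(in_label, centroids, threshold=1.0))
print(is_out_of_label(unknown, centroids, threshold=1.0))
```

In this framing, Out-of-Label detection operates within the native world the features were learned on; a full pipeline in the paper's sense would additionally adapt the feature space at each incremental phase.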

Citations (13)

