
A Survey on Open-Set Image Recognition (2312.15571v1)

Published 25 Dec 2023 in cs.CV

Abstract: Open-set image recognition (OSR) aims to both classify known-class samples and identify unknown-class samples in the test set, which supports robust classifiers in many realistic applications such as autonomous driving, medical diagnosis, and security monitoring. In recent years, open-set recognition methods have attracted increasing attention, since it is usually difficult to obtain holistic information about the open world for model training. In this paper, we summarize the latest developments in OSR methods, given their rapid progress over the past two to three years. Specifically, we first introduce a new taxonomy, under which we comprehensively review the existing DNN-based OSR methods. Then, we compare the performance of typical and state-of-the-art OSR methods on both coarse-grained and fine-grained datasets under both the standard-dataset and cross-dataset settings, and analyze the comparison results. Finally, we discuss open issues and possible future directions in this community.
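The open-set decision rule the abstract describes (classify knowns, reject unknowns) is often realized in its simplest baseline form as a confidence threshold on a closed-set classifier's softmax output. A minimal sketch follows; the threshold value and the `open_set_predict` helper are illustrative assumptions, not a method from the survey:

```python
import numpy as np

def open_set_predict(logits, threshold=0.75, unknown_label=-1):
    """Return the predicted known-class index if the max softmax
    probability reaches `threshold`; otherwise reject the sample
    as unknown. (Illustrative thresholding baseline only.)"""
    logits = np.asarray(logits, dtype=float)
    exps = np.exp(logits - logits.max())   # numerically stable softmax
    probs = exps / exps.sum()
    if probs.max() >= threshold:
        return int(probs.argmax())
    return unknown_label

# A confident (peaked) prediction is accepted as known class 2.
print(open_set_predict([0.1, 0.2, 5.0]))   # 2
# Near-uniform logits give low confidence, so the sample is rejected.
print(open_set_predict([1.0, 1.1, 0.9]))   # -1
```

More sophisticated OSR methods reviewed in the survey replace this fixed threshold with learned score functions (e.g. OpenMax-style calibrated scores, reconstruction errors, or prototype distances), but the accept-or-reject structure of the decision is the same.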
