
Exploring Diverse Representations for Open Set Recognition (2401.06521v1)

Published 12 Jan 2024 in cs.CV

Abstract: Open set recognition (OSR) requires a model to classify samples from known (closed-set) classes while rejecting unknown samples at test time. Generative models currently tend to outperform discriminative models in OSR, but recent studies show that generative models may be computationally infeasible or unstable on complex tasks. In this paper, we provide insights into OSR and find that learning supplementary representations can theoretically reduce the open space risk. Based on this analysis, we propose a new model, Multi-Expert Diverse Attention Fusion (MEDAF), that learns diverse representations in a discriminative way. MEDAF consists of multiple experts trained with an attention diversity regularization term that keeps their attention maps mutually different. The logits learned by each expert are adaptively fused and used to identify unknowns through a score function. We show that differences in attention maps lead to diverse representations, so that the fused representation can well handle the open space. Extensive experiments are conducted on standard and large-scale OSR benchmarks. Results show that the proposed discriminative method outperforms existing generative models by up to 9.5% on AUROC and achieves new state-of-the-art performance with little computational cost. Our method can also seamlessly integrate existing classification models. Code is available at https://github.com/Vanixxz/MEDAF.
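
The abstract describes three ingredients: multiple experts on a shared backbone, an attention diversity term that keeps the experts' attention maps mutually different, and an adaptive fusion of the experts' logits that feeds a score function for rejecting unknowns. The PyTorch sketch below illustrates one way these pieces could fit together; the 1x1-conv expert heads, softmax gating, cosine-similarity diversity penalty, and maximum-softmax unknown score are all illustrative assumptions rather than the paper's exact design (see the official repository for the authors' implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExpertHead(nn.Module):
    """Sketch of a multi-expert head with gated logit fusion.
    Architectural details are assumptions, not the authors' exact design."""

    def __init__(self, feat_dim=512, num_classes=10, num_experts=3):
        super().__init__()
        # Each expert is a lightweight 1x1 conv on the shared backbone feature map,
        # producing class activation maps (CAMs) per expert.
        self.experts = nn.ModuleList(
            nn.Conv2d(feat_dim, num_classes, kernel_size=1) for _ in range(num_experts)
        )
        # Gating head: per-expert fusion weights from globally pooled features.
        self.gate = nn.Linear(feat_dim, num_experts)

    def forward(self, feat_map):
        # feat_map: (B, feat_dim, H, W) from a shared backbone (e.g. a ResNet).
        cams = [expert(feat_map) for expert in self.experts]        # per-expert CAMs
        logits = [cam.mean(dim=(2, 3)) for cam in cams]             # GAP -> per-expert logits
        weights = F.softmax(self.gate(feat_map.mean(dim=(2, 3))), dim=1)  # (B, E)
        fused = sum(w.unsqueeze(1) * l for w, l in zip(weights.unbind(1), logits))
        return fused, cams

def attention_diversity_loss(cams):
    """Penalize pairwise similarity between experts' spatial attention maps,
    encouraging experts to attend to different regions (one possible diversity term)."""
    atts = [F.normalize(c.abs().sum(dim=1).flatten(1), dim=1) for c in cams]
    loss, pairs = 0.0, 0
    for i in range(len(atts)):
        for j in range(i + 1, len(atts)):
            loss = loss + (atts[i] * atts[j]).sum(dim=1).mean()  # cosine similarity
            pairs += 1
    return loss / max(pairs, 1)

def unknown_score(fused_logits):
    """Score samples by the maximum softmax probability of the fused logits;
    low confidence suggests an open-set sample (a common choice, assumed here)."""
    return F.softmax(fused_logits, dim=1).max(dim=1).values
```

In this sketch, the training loss would combine standard cross-entropy on the fused (and optionally per-expert) logits with the diversity penalty, and a threshold on unknown_score decides acceptance or rejection at test time.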

Authors (4)
  1. Yu Wang (939 papers)
  2. Junxian Mu (2 papers)
  3. Pengfei Zhu (76 papers)
  4. Qinghua Hu (83 papers)
Citations (2)
