
Zero-shot generalization across architectures for visual classification (2402.14095v4)

Published 21 Feb 2024 in cs.CV, cs.AI, and cs.LG

Abstract: Generalization to unseen data is a key desideratum for deep networks, but its relation to classification accuracy is unclear. Using a minimalist vision dataset and a measure of generalizability, we show that popular networks, from deep convolutional networks (CNNs) to transformers, vary in their power to extrapolate to unseen classes both across layers and across architectures. Accuracy is not a good predictor of generalizability, and generalization varies non-monotonically with layer depth.
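The abstract describes measuring how well intermediate representations extrapolate to unseen classes, layer by layer and architecture by architecture. The exact dataset and generalizability measure are not given here, so the sketch below is only an illustrative assumption: it pulls pooled activations from several stages of a pretrained torchvision ResNet-18 and scores held-out "unseen" classes with a nearest-class-mean probe, letting the resulting accuracy be compared across depths.

```python
# Hypothetical sketch: layer-wise probe of generalization to unseen classes.
# The backbone (ResNet-18), the chosen layers, and the nearest-class-mean
# score are illustrative assumptions, not the paper's exact protocol.
import torch
import torch.nn as nn
from torchvision import models


def layer_features(model: nn.Module, layer_names, x: torch.Tensor):
    """Collect pooled activations from the named layers via forward hooks."""
    feats, hooks = {}, []

    def make_hook(name):
        def hook(_module, _inputs, out):
            # Global-average-pool spatial maps so every layer yields a vector.
            feats[name] = out.flatten(2).mean(dim=2) if out.dim() == 4 else out
        return hook

    for name, module in model.named_modules():
        if name in layer_names:
            hooks.append(module.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return feats


def ncm_accuracy(train_f, train_y, test_f, test_y):
    """Nearest-class-mean accuracy: one proxy for how separable unseen classes are."""
    classes = train_y.unique()
    means = torch.stack([train_f[train_y == c].mean(0) for c in classes])
    preds = classes[torch.cdist(test_f, means).argmin(dim=1)]
    return (preds == test_y).float().mean().item()


if __name__ == "__main__":
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    layers = ["layer1", "layer2", "layer3", "layer4"]
    # Stand-in tensors for images of classes the backbone never saw in training.
    support_x, support_y = torch.randn(40, 3, 224, 224), torch.randint(0, 4, (40,))
    query_x, query_y = torch.randn(40, 3, 224, 224), torch.randint(0, 4, (40,))
    sup = layer_features(model, layers, support_x)
    qry = layer_features(model, layers, query_x)
    for name in layers:
        acc = ncm_accuracy(sup[name], support_y, qry[name], query_y)
        print(f"{name}: nearest-class-mean accuracy on unseen classes = {acc:.3f}")
```

With a real dataset in place of the random tensors, a non-monotonic accuracy curve across `layer1`-`layer4` would mirror the abstract's claim that generalizability does not simply improve with depth.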
