Transfer Learning for Fine-grained Classification Using Semi-supervised Learning and Visual Transformers (2305.10018v1)

Published 17 May 2023 in cs.CV, cs.AI, and cs.LG

Abstract: Fine-grained classification is a challenging task that involves identifying subtle differences between objects within the same category. This task is particularly challenging in scenarios where data is scarce. Visual transformers (ViT) have recently emerged as a powerful tool for image classification, due to their ability to learn highly expressive representations of visual data using self-attention mechanisms. In this work, we explore Semi-ViT, a ViT model fine-tuned using semi-supervised learning techniques, suitable for situations where annotated data is scarce. This is particularly common in e-commerce, where images are readily available but labels are noisy, nonexistent, or expensive to obtain. Our results demonstrate that Semi-ViT outperforms traditional convolutional neural networks (CNNs) and ViTs, even when fine-tuned with limited annotated data. These findings indicate that Semi-ViT holds significant promise for applications that require precise and fine-grained classification of visual data.
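The abstract describes fine-tuning a pretrained ViT with semi-supervised learning when only a small labeled set is available. The sketch below is a minimal illustration of one common semi-supervised fine-tuning scheme (confidence-thresholded pseudo-labeling), not the exact Semi-ViT recipe from the paper; the class count, confidence threshold, and data-loader names are hypothetical placeholders.

```python
# Minimal sketch: semi-supervised fine-tuning of a pretrained ViT via pseudo-labeling.
# Assumptions (not from the paper): NUM_CLASSES, CONF_THRESHOLD, and the batch
# variables are illustrative; the paper's Semi-ViT training procedure may differ.
import torch
import torch.nn.functional as F
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_CLASSES = 50        # hypothetical number of fine-grained classes
CONF_THRESHOLD = 0.95   # keep only high-confidence pseudo-labels

# Start from an ImageNet-pretrained ViT and replace the classification head.
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = torch.nn.Linear(model.heads.head.in_features, NUM_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(labeled_batch, unlabeled_batch):
    """One update combining a supervised loss and a pseudo-label loss."""
    x_l, y_l = labeled_batch      # small annotated batch
    x_u = unlabeled_batch         # unlabeled images only

    # Supervised loss on the annotated data.
    loss_sup = F.cross_entropy(model(x_l), y_l)

    # Pseudo-labels from the model's own confident predictions.
    with torch.no_grad():
        probs_u = F.softmax(model(x_u), dim=-1)
        conf, pseudo_y = probs_u.max(dim=-1)
        mask = conf >= CONF_THRESHOLD

    loss_unsup = torch.tensor(0.0)
    if mask.any():
        loss_unsup = F.cross_entropy(model(x_u[mask]), pseudo_y[mask])

    loss = loss_sup + loss_unsup
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the threshold and the weighting between the two loss terms control how much the model trusts its own predictions; the paper's reported gains over CNN and ViT baselines come from a semi-supervised setup in this spirit, fine-tuned on limited annotated data.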

Authors (7)
  1. Manuel Lagunas (8 papers)
  2. Brayan Impata (1 paper)
  3. Victor Martinez (5 papers)
  4. Virginia Fernandez (7 papers)
  5. Christos Georgakis (1 paper)
  6. Sofia Braun (1 paper)
  7. Felipe Bertrand (1 paper)
Citations (8)