
Single-image driven 3D viewpoint training data augmentation for effective wine label recognition

Published 12 Apr 2024 in cs.CV and cs.LG | (2404.08820v1)

Abstract: Confronting the critical challenge of insufficient training data in complex image recognition, this paper introduces a novel 3D viewpoint augmentation technique specifically tailored for wine label recognition. The method enhances deep learning model performance by generating visually realistic training samples from a single real-world wine label image, overcoming the challenges posed by intricate combinations of text and logos. Classical Generative Adversarial Network (GAN) methods fall short in synthesizing such intricate content combinations. Our proposed solution instead leverages time-tested computer vision and image processing strategies to expand the training dataset, broadening the range of training samples for deep learning applications and circumventing the constraints of limited training resources. By training a Vision Transformer (ViT) architecture on the augmented images with batch-all triplet metric learning, we obtain discriminative embedding features for every wine label, enabling one-shot recognition both of wine labels in the training classes and of newly collected wine labels unavailable during training. Experimental results show a significant increase in recognition accuracy over conventional 2D data augmentation techniques.

Summary

  • The paper presents a novel 3D viewpoint augmentation method that produces multiple realistic training images from just one wine label photograph.
  • It details a three-step process using edge detection, line sample extraction, and perspective mapping via view-invariant cross-ratios to simulate various poses.
  • Experimental results show that a Vision Transformer model with 3D augmentation achieved a Top-1 accuracy of 91.15%, outperforming traditional 2D methods.

This paper (2404.08820) addresses the significant challenge of insufficient training data for deep learning models in complex image recognition tasks, specifically focusing on wine label recognition. Traditional methods like OCR are prone to errors with variations in text, logos, and damage. Standard 2D data augmentation (like rotation, scaling, etc.) often fails to realistically simulate the perspective changes of a label wrapped around a cylindrical bottle. Deep learning-based generation methods typically require large datasets themselves.

The paper proposes a novel 3D viewpoint augmentation technique that can generate numerous visually realistic training samples from just a single real-world image of a wine label per class. This is particularly valuable for scenarios where collecting diverse images across different viewpoints is difficult.

The core of the proposed method involves simulating the perspective transformation of a wine label on a cylindrical surface. This is achieved through a three-step process:

  1. 2D Description of 3D Surface: The method analyzes the input image to identify the elliptical curves that form the upper and lower rims of the wine label and the straight lines that form its longitudinal edges. Concretely, it converts the image to grayscale, detects vertical edges, groups edge pixels into blocks, labels the blocks by gradient direction, chains the blocks into curves, applies non-maximum suppression to thin the rims, and finally fits ellipses to the rim pixels.
  2. Line Sample Extraction: Based on the identified longitudinal edges, their intersection provides a vanishing point (VP). The method then extracts 2D line samples across the label region by connecting points on the wider rim towards this vanishing point, extending to the smaller rim. These line samples effectively capture the content along the vertical direction of the label in the original image's perspective.
  3. Perspective Mapping (Synthesizing New Views): To generate a new viewpoint, the method identifies the vanishing point and rim locations corresponding to a desired target pose of the virtual wine bottle. It then re-projects the image pixels from the extracted line samples (Step 2) onto the corresponding line segments in the target view. This re-projection uses the principle of view-invariant cross-ratios, ensuring that the relative positions of features along each line sample are preserved realistically despite the perspective change. This process allows generating images simulating various poses (rotation and translation) of the wine bottle.
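The cross-ratio re-projection in Step 3 can be illustrated with a small sketch. For four collinear points the cross-ratio is invariant under perspective projection, so a pixel's position along a source line sample (anchored by its two rim intersections, with the vanishing point as the fourth point) uniquely determines its position on the corresponding line in the target view. The 1-D parametrization and function names below are illustrative assumptions, not the paper's implementation:

```python
def cross_ratio(t1, t2, t3, t4):
    """Cross-ratio (t1, t2; t3, t4) of four collinear points given as
    signed 1-D coordinates along their common line."""
    return ((t3 - t1) * (t4 - t2)) / ((t3 - t2) * (t4 - t1))

def map_point(t_src, rim1_src, rim2_src, vp_src, rim1_dst, rim2_dst, vp_dst):
    """Map a 1-D coordinate t_src on a source line sample to the target
    line, preserving the cross-ratio (rim1, rim2; point, vanishing point).

    rim1/rim2 are the line's intersections with the two label rims and
    vp is the vanishing point, all as 1-D coordinates along the line.
    """
    cr = cross_ratio(rim1_src, rim2_src, t_src, vp_src)
    # Solve cross_ratio(rim1_dst, rim2_dst, t, vp_dst) == cr for t.
    # With a=rim1_dst, b=rim2_dst, v=vp_dst:
    #   cr = ((t - a) * (v - b)) / ((t - b) * (v - a))   (linear in t)
    a, b, v = rim1_dst, rim2_dst, vp_dst
    k = cr * (v - a) / (v - b)          # (t - a) / (t - b) == k
    return (a - k * b) / (1.0 - k)
```

Re-projecting the same point onto an identical target line recovers its original coordinate, while re-projecting onto a uniformly scaled configuration scales the coordinate accordingly, which is the invariance the synthesis step relies on.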

For wine label recognition, the paper employs a metric learning approach using a Vision Transformer (ViT) architecture, specifically leveraging the DINO framework initialization and training with Batch-All Triplet Loss.

  • Training: The ViT model is trained on the large dataset of augmented images generated by the 3D viewpoint method. The goal is to learn embeddings such that embeddings of images from the same wine label class are close in the embedding space, while those from different classes are far apart. Batch-All Triplet Loss helps achieve this by considering all possible valid triplets within a batch.
  • Testing: For a given test image, the same 3D viewpoint principle is applied to transform the label region into a synthesized front-view perspective. The trained ViT model then extracts the embedding feature vector for this standardized front-view image. Recognition is performed by calculating the cosine distance between this test embedding and the embeddings of the original, non-augmented front-view training images. The wine label class corresponding to the embedding with the smallest distance is the predicted label (one-shot recognition capability for new classes).
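The training and testing steps above can be sketched in a few lines of NumPy. This is an illustrative toy version, not the authors' code: the paper uses ViT embeddings, whereas here `embeddings` is just a placeholder array, and the batch-all loss shown averages over triplets with positive loss (one common variant):

```python
import numpy as np

def batch_all_triplet_loss(embeddings, labels, margin=0.2):
    """Batch-all triplet loss: average hinge loss over every valid
    (anchor, positive, negative) triplet in the batch that has a
    positive loss, using squared Euclidean distances."""
    dots = embeddings @ embeddings.T
    sq = np.diag(dots)
    dist = sq[:, None] - 2.0 * dots + sq[None, :]   # pairwise squared L2
    same = labels[:, None] == labels[None, :]
    n = len(labels)
    losses = []
    for a in range(n):
        for p in range(n):
            if a == p or not same[a, p]:
                continue
            for neg in range(n):
                if same[a, neg]:
                    continue
                loss_apn = dist[a, p] - dist[a, neg] + margin
                if loss_apn > 0:
                    losses.append(loss_apn)
    return float(np.mean(losses)) if losses else 0.0

def one_shot_predict(test_emb, gallery_embs, gallery_labels):
    """Nearest-neighbor class by cosine distance to the one
    front-view gallery embedding per wine label."""
    t = test_emb / np.linalg.norm(test_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    cos_dist = 1.0 - g @ t
    return gallery_labels[int(np.argmin(cos_dist))]
```

Because recognition is a nearest-neighbor lookup in embedding space, adding a new wine label only requires embedding one front-view image of it, with no retraining, which is the one-shot property the paper highlights.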

The paper uses a dataset of 885 unique wine labels, with only one image per class for training (collected from websites, often frontal views) and 3-5 images per class for testing (captured with cell phones under varying conditions). They generated 320 augmented images per training sample using controlled rotations and translations.

Experimental results demonstrate that the proposed 3D viewpoint augmentation significantly improves recognition accuracy compared to traditional 2D augmentation techniques. Using ViT-S/16 with 3D augmentation achieved a Top-1 accuracy of 91.15%, a substantial improvement (over 14%) compared to the same model with 2D augmentation (76.39%). ViT models consistently outperformed ResNet architectures for this task when combined with the proposed augmentation, attributed to ViT's ability to learn more discriminative embeddings. The study also found that replacing the black background of synthesized images with random backgrounds further boosted accuracy (from 87.88% to 91.15% Top-1 for ViT-S/16). Notably, applying additional 2D perspective transformations after the 3D augmentation did not improve and could even harm the performance of the 3D augmented model, suggesting the 3D method provides sufficient and realistic perspective variation.

The practical implication is the ability to train a robust wine label recognition system with extremely limited initial data (a single image per class), enabling effective recognition of labels seen from various real-world perspectives and facilitating the integration of new wine labels without extensive data collection and retraining.
