
What If We Only Use Real Datasets for Scene Text Recognition? Toward Scene Text Recognition With Fewer Labels (2103.04400v2)

Published 7 Mar 2021 in cs.CV

Abstract: Scene text recognition (STR) task has a common practice: All state-of-the-art STR models are trained on large synthetic data. In contrast to this practice, training STR models only on fewer real labels (STR with fewer labels) is important when we have to train STR models without synthetic data: for handwritten or artistic texts that are difficult to generate synthetically and for languages other than English for which we do not always have synthetic data. However, there has been implicit common knowledge that training STR models on real data is nearly impossible because real data is insufficient. We consider that this common knowledge has obstructed the study of STR with fewer labels. In this work, we would like to reactivate STR with fewer labels by disproving the common knowledge. We consolidate recently accumulated public real data and show that we can train STR models satisfactorily only with real labeled data. Subsequently, we find simple data augmentation to fully exploit real data. Furthermore, we improve the models by collecting unlabeled data and introducing semi- and self-supervised methods. As a result, we obtain a competitive model to state-of-the-art methods. To the best of our knowledge, this is the first study that 1) shows sufficient performance by only using real labels and 2) introduces semi- and self-supervised methods into STR with fewer labels. Our code and data are available: https://github.com/ku21fan/STR-Fewer-Labels

Analyzing the Efficacy of Using Only Real Datasets for Scene Text Recognition

This paper investigates the implications of training scene text recognition (STR) models exclusively on real-world datasets, with an emphasis on reducing the reliance on synthetic data. The authors present a novel perspective in the STR domain by challenging the long-held assumption that training solely on real data is unviable due to limitations in data availability.

Overview of the Approach

The paper positions itself against the conventional practice of using vast amounts of synthetic data to train state-of-the-art STR models, citing the scarcity of synthetic datasets for non-English and highly stylistic text domains. To address this, the authors consolidate a comprehensive collection of real labeled data, comprising 276K images from 11 datasets. They propose enhancing model performance through data augmentation techniques, as well as utilizing semi- and self-supervised learning methods on an additional 4.2M unlabeled real images.

Main Contributions

  1. Sufficient Real Data Compilation: The authors argue, and demonstrate empirically, that the accumulated real labeled data is now large enough to train STR models with accuracy competitive with synthetic-data training. Both CRNN and TRBA models, when trained solely on this real data corpus, achieve accuracies close to those obtained with synthetic data.
  2. Data Augmentation Techniques: By applying simple augmentations like Blur, Crop, and Rotation, the paper reports significant improvements in STR performance, particularly when augmenting real data. This highlights how augmentation can be pivotal in optimizing data use.
  3. Semi- and Self-Supervised Learning: Leveraging techniques such as Pseudo-Labeling and RotNet, the authors incorporate unlabeled data to further boost model performance. This strategy substitutes effectively for synthetic data, offering a viable alternative for enhancing label efficiency in real-world applications.
  4. Analysis of Data Impact: The paper explores the dynamics of accuracy as a function of the volume of training data. Notably, models trained on a mix of labeled and unlabeled real data achieved accuracy gains, implying that real data diversity can compensate for volume limitations.
  5. Practical Implications for Multi-Domain STR: A final experiment mixes real and synthetic datasets, offering insights into tailoring STR training to application-specific constraints where alignment with real-world data is crucial.
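The augmentations named in contribution 2 (Blur, Crop, Rotation) can be sketched with standard image operations. This is a minimal illustration, not the paper's actual pipeline; the function name, probabilities, and parameter ranges are assumptions chosen for readability.

```python
import random
from PIL import Image, ImageFilter

def augment_word_image(img, p=0.5):
    """Illustrative sketch of the paper's simple augmentations
    (Blur, Crop, Rotation) on a cropped word image.
    Probabilities and ranges are assumptions, not the paper's settings."""
    w, h = img.size
    if random.random() < p:
        # Blur: mild Gaussian blur so characters remain legible.
        img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 1.5)))
    if random.random() < p:
        # Crop: trim up to 10% from each border, keeping the text region.
        dx, dy = max(1, int(0.1 * w)), max(1, int(0.1 * h))
        img = img.crop((random.randint(0, dx), random.randint(0, dy),
                        w - random.randint(0, dx), h - random.randint(0, dy)))
    if random.random() < p:
        # Rotation: small angles only, so the text stays readable.
        img = img.rotate(random.uniform(-15, 15), expand=True,
                         fillcolor=(128, 128, 128))
    return img
```

In practice each operation would be tuned per model and dataset; the paper's point is that even such simple transforms yield significant gains when real data is scarce.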
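The Pseudo-Labeling step from contribution 3 follows a standard recipe: a model trained on the labeled real data predicts on unlabeled images, and only confident predictions are kept as new training labels. The sketch below assumes a `model_predict` callable returning a (text, confidence) pair; the threshold value and all names are illustrative, not taken from the paper's code.

```python
def pseudo_label(model_predict, unlabeled, threshold=0.9):
    """Pseudo-Labeling sketch: keep only high-confidence predictions
    on unlabeled images as (image, text) training pairs.
    `model_predict` and `threshold` are illustrative assumptions."""
    pseudo = []
    for img in unlabeled:
        text, confidence = model_predict(img)
        if confidence >= threshold:  # discard uncertain predictions
            pseudo.append((img, text))
    return pseudo
```

The resulting pairs are merged with the labeled corpus and the model is retrained; RotNet-style self-supervised pretraining, the other technique the paper uses, instead pretrains the encoder to predict image rotations before fine-tuning on labels.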

Implications and Future Directions

This research shifts the paradigm towards the feasibility of STR with limited annotated real data, suggesting practical applications like multi-lingual text recognition. The findings imply that substantial real-world data, especially when diverse and augmented intelligently, can challenge synthetic data's dominance in STR training. As STR systems become more deployable across niche domains without needing expert-generated synthetic datasets, this approach could democratize access to robust text recognition solutions.

Future directions may explore deeper semi-supervised frameworks or investigate how minimal labeled real data, when strategically coupled with unlabeled data from relevant domains, can consistently surpass traditional synthetic data methods. Additionally, expanding these techniques to languages with less digital representation could broaden the scope and accessibility of STR technologies globally, fostering further research in STR adaptations and generalizations.

Authors (3)
  1. Jeonghun Baek (11 papers)
  2. Yusuke Matsui (35 papers)
  3. Kiyoharu Aizawa (67 papers)
Citations (81)