Multi-modal In-Context Learning Makes an Ego-evolving Scene Text Recognizer (2311.13120v3)

Published 22 Nov 2023 in cs.CV

Abstract: Scene text recognition (STR) in the wild frequently encounters challenges when coping with domain variations, font diversity, shape deformations, etc. A straightforward solution is to fine-tune the model for each specific scenario, but this is computationally intensive and requires multiple model copies for the various scenarios. Recent studies indicate that LLMs can learn from a few demonstration examples in a training-free manner, termed "In-Context Learning" (ICL). Nevertheless, applying an LLM as a text recognizer is unacceptably resource-consuming. Moreover, our pilot experiments on LLMs show that ICL fails in STR, a failure mainly attributed to the insufficient incorporation of contextual information from diverse samples during training. To this end, we introduce E$^2$STR, an STR model trained with context-rich scene text sequences, where the sequences are generated via our proposed in-context training strategy. E$^2$STR demonstrates that a regular-sized model is sufficient to achieve effective ICL capabilities in STR. Extensive experiments show that E$^2$STR exhibits remarkable training-free adaptation in various scenarios and outperforms even the fine-tuned state-of-the-art approaches on public benchmarks. The code is released at https://github.com/bytedance/E2STR.
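
The abstract's central mechanism is that, at inference time, the recognizer conditions on a few (image, transcription) demonstration pairs and decodes the query image without any weight updates. Below is a minimal sketch of how such an interleaved ICL prompt could be assembled. The stub `Demo` type, the `<img>`/`<txt>` markers, and `build_icl_prompt` are illustrative assumptions for exposition, not the released E$^2$STR API.

```python
# Minimal sketch (not the authors' code) of training-free in-context
# prompting for STR: demonstration (image, transcription) pairs are
# interleaved with the query image, and the decoder is expected to
# continue the sequence with the query's transcription.

from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class Demo:
    image_tokens: Sequence[float]  # placeholder for encoded image features
    text: str                      # ground-truth transcription

def build_icl_prompt(demos: List[Demo],
                     query_tokens: Sequence[float]) -> list:
    """Interleave demonstrations with the query image.

    The model sees: <img> feats <txt> label ... <img> query_feats <txt>
    and decodes the query transcription after the final <txt> marker.
    """
    prompt: list = []
    for d in demos:
        prompt += ["<img>", list(d.image_tokens), "<txt>", d.text]
    prompt += ["<img>", list(query_tokens), "<txt>"]  # completion point
    return prompt

if __name__ == "__main__":
    demos = [Demo(image_tokens=[0.1, 0.2], text="CAFE"),
             Demo(image_tokens=[0.3, 0.4], text="EXIT")]
    print(build_icl_prompt(demos, query_tokens=[0.5, 0.6]))
```

In this reading, the paper's in-context training strategy exists precisely so that the model learns to exploit the demonstration pairs in such a sequence; a model trained only on isolated samples would ignore them, which is the ICL failure the pilot experiments report.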

Authors (9)
  1. Zhen Zhao (85 papers)
  2. Jingqun Tang (22 papers)
  3. Chunhui Lin (9 papers)
  4. Binghong Wu (12 papers)
  5. Hao Liu (497 papers)
  6. Zhizhong Zhang (42 papers)
  7. Xin Tan (63 papers)
  8. Can Huang (43 papers)
  9. Yuan Xie (188 papers)
Citations (6)