ICDAR 2019 Competition on Large-scale Street View Text with Partial Labeling -- RRC-LSVT (1909.07741v1)
Abstract: Robust text reading from street view images provides valuable information for various applications. Improving the performance of existing methods in such a challenging scenario relies heavily on the amount of fully annotated training data, which is costly and inefficient to obtain. To scale up the amount of training data while keeping the labeling procedure cost-effective, this competition introduces a new challenge on Large-scale Street View Text with Partial Labeling (LSVT), providing 50,000 fully annotated and 400,000 weakly annotated images. This competition aims to explore the ability of state-of-the-art methods to detect and recognize text instances from large-scale street view images, closing the gap between research benchmarks and real applications. During the competition period, a total of 41 teams participated in the two proposed tasks, i.e., text detection and end-to-end text spotting, with 132 valid submissions. This paper includes dataset descriptions, task definitions, evaluation protocols and result summaries of the ICDAR 2019-LSVT challenge.
- Yipeng Sun (20 papers)
- Zihan Ni (3 papers)
- Chee-Kheng Chng (5 papers)
- Yuliang Liu (82 papers)
- Canjie Luo (20 papers)
- Chun Chet Ng (6 papers)
- Junyu Han (53 papers)
- Errui Ding (156 papers)
- Jingtuo Liu (36 papers)
- Dimosthenis Karatzas (80 papers)
- Chee Seng Chan (50 papers)
- Lianwen Jin (116 papers)
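
The abstract refers to evaluation protocols for the text detection task without detailing them here. As a hedged illustration only, the sketch below computes a standard polygon IoU-based detection score (precision, recall, H-mean); the IoU ≥ 0.5 threshold, the greedy one-to-one matching, and the omission of "don't care" illegible regions are assumptions for the sketch, not the official LSVT protocol, which is specified in the paper.

```python
# Minimal sketch of an IoU-based text detection evaluation, assuming a common
# "match if polygon IoU >= 0.5, one-to-one" convention from ICDAR robust
# reading challenges. Not the official LSVT evaluation code.
from shapely.geometry import Polygon

IOU_THRESHOLD = 0.5  # assumed matching threshold


def polygon_iou(pts_a, pts_b):
    """Intersection-over-union of two polygons given as [(x, y), ...] lists."""
    poly_a, poly_b = Polygon(pts_a), Polygon(pts_b)
    if not poly_a.is_valid or not poly_b.is_valid:
        return 0.0
    inter = poly_a.intersection(poly_b).area
    union = poly_a.union(poly_b).area
    return inter / union if union > 0 else 0.0


def evaluate_detections(gt_polys, pred_polys):
    """Greedy one-to-one matching; returns (precision, recall, H-mean)."""
    matched_gt, matched_pred = set(), set()
    for gi, gt in enumerate(gt_polys):
        for pi, pred in enumerate(pred_polys):
            if pi in matched_pred:
                continue
            if polygon_iou(gt, pred) >= IOU_THRESHOLD:
                matched_gt.add(gi)
                matched_pred.add(pi)
                break
    tp = len(matched_gt)
    precision = tp / len(pred_polys) if pred_polys else 0.0
    recall = tp / len(gt_polys) if gt_polys else 0.0
    hmean = (2 * precision * recall / (precision + recall)
             if precision + recall > 0 else 0.0)
    return precision, recall, hmean


if __name__ == "__main__":
    gt = [[(0, 0), (100, 0), (100, 30), (0, 30)]]
    pred = [[(5, 0), (100, 0), (100, 30), (5, 30)]]
    print(evaluate_detections(gt, pred))  # IoU ~0.95 -> (1.0, 1.0, 1.0)
```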