A Comprehensive Study of ImageNet Pre-Training for Historical Document Image Analysis (1905.09113v1)
Abstract: Automatic analysis of scanned historical documents comprises a wide range of image analysis tasks, which are often challenging for machine learning due to a lack of human-annotated learning samples. With the advent of deep neural networks, a promising way to cope with the lack of training data is to pre-train models on images from a different domain and then fine-tune them on historical documents. In current research, a typical example of such cross-domain transfer learning is the use of neural networks that have been pre-trained on the ImageNet database for object recognition. It remains a mostly open question whether or not this pre-training helps to analyse historical documents, which have fundamentally different image properties when compared with ImageNet. In this paper, we present a comprehensive empirical survey on the effect of ImageNet pre-training for diverse historical document analysis tasks, including character recognition, style classification, manuscript dating, semantic segmentation, and content-based retrieval. While we obtain mixed results for semantic segmentation at the pixel level, we observe a clear trend across different network architectures that ImageNet pre-training has a positive effect on classification as well as content-based retrieval.
- Linda Studer
- Michele Alberti
- Vinaychandran Pondenkandath
- Pinar Goktepe
- Thomas Kolonko
- Andreas Fischer
- Marcus Liwicki
- Rolf Ingold
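The cross-domain transfer-learning recipe the abstract studies — pre-train a network on a source domain such as ImageNet, freeze (or gently adapt) its feature extractor, and fine-tune only a new task head on the small target dataset — can be illustrated with a minimal numpy sketch. Everything below is a hypothetical stand-in for exposition, not the paper's code: the "backbone" is a fixed random projection playing the role of pre-trained convolutional features, and the synthetic 3-class task stands in for a historical-document task such as style classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" backbone: a frozen projection standing in for
# convolutional features learned on ImageNet (hypothetical stand-in).
W_backbone = rng.standard_normal((64, 16)) / 8.0

def features(x):
    # Frozen feature extractor with a ReLU; never updated while fine-tuning.
    return np.maximum(x @ W_backbone, 0.0)

# Tiny synthetic target task with 3 classes, standing in for a
# historical-document classification problem with few labeled samples.
n, n_classes = 120, 3
X = rng.standard_normal((n, 64))
W_true = rng.standard_normal((16, n_classes))
y = (features(X) @ W_true).argmax(axis=1)  # labels realizable from the features

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Fine-tuning: only the new classification head is trained.
W_head = np.zeros((16, n_classes))
F = features(X)          # backbone outputs, computed once since it is frozen
lr = 0.1
for _ in range(300):
    p = softmax(F @ W_head)
    p[np.arange(n), y] -= 1.0        # gradient of cross-entropy w.r.t. scores
    W_head -= lr * (F.T @ p) / n     # update the head; backbone stays frozen

acc = (softmax(F @ W_head).argmax(axis=1) == y).mean()
print(f"training accuracy of the fine-tuned head: {acc:.2f}")
```

In practice (e.g. with torchvision models), the same pattern means loading ImageNet weights, replacing the final classification layer to match the target classes, and training either the new layer alone or the whole network at a small learning rate; the paper's survey compares such pre-trained initializations against training from scratch across its five task families.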