Leveraging Text Repetitions and Denoising Autoencoders in OCR Post-correction (1906.10907v1)

Published 26 Jun 2019 in cs.CL

Abstract: A common approach for improving OCR quality is a post-processing step based on models correcting misdetected characters and tokens. These models are typically trained on aligned pairs of OCR-read text and their manually corrected counterparts. In this paper, we show that the requirement of manually corrected training data can be alleviated by estimating the OCR errors from repeating text spans found in large OCR-read text corpora and generating synthetic training examples following this error distribution. We use the generated data to train a character-level neural seq2seq model and evaluate its performance on a manually corrected corpus of Finnish newspapers, mostly from the 19th century. The results show a clear improvement over both the underlying OCR system and previously suggested models utilizing uniformly generated noise.
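The key idea is to corrupt clean text with noise drawn from an error distribution estimated from the corpus itself, producing synthetic (noisy, clean) pairs for the seq2seq corrector. The Python sketch below illustrates this generation step under simplifying assumptions: the `error_model` table, its probabilities, and the `corrupt` helper are hypothetical placeholders, since the paper estimates the distribution by aligning repeated text spans in the OCR-read corpus rather than from a hand-written table.

```python
import random

# Hypothetical error distribution; in the paper it is estimated by
# aligning repeated text spans found in a large OCR-read corpus.
# Each entry maps a correct character to observed OCR outputs and
# their probabilities; an empty string models a deletion and a
# multi-character string models a split (e.g. "m" read as "rn").
error_model = {
    "a": [("a", 0.95), ("u", 0.03), ("n", 0.02)],
    "i": [("i", 0.90), ("l", 0.05), ("1", 0.03), ("", 0.02)],
    "m": [("m", 0.92), ("rn", 0.05), ("nn", 0.03)],
}

def corrupt(clean_text: str, error_model: dict) -> str:
    """Sample a synthetic OCR-style corruption of clean text by
    drawing per-character substitutions from the error distribution."""
    out = []
    for ch in clean_text:
        options = error_model.get(ch)
        if options is None:
            out.append(ch)  # no errors observed for this character
            continue
        chars, probs = zip(*options)
        out.append(random.choices(chars, weights=probs, k=1)[0])
    return "".join(out)

# A synthetic training pair for the character-level seq2seq model:
# the model learns to map the corrupted string back to the clean one.
clean = "maailma"  # Finnish for "world"
pair = (corrupt(clean, error_model), clean)
print(pair)
```

Because the noise follows the empirically observed error distribution rather than uniform random substitutions, the synthetic pairs resemble real OCR output more closely, which the evaluation credits for the improvement over uniformly generated noise.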

Authors (5)
  1. Kai Hakala (2 papers)
  2. Aleksi Vesanto (1 paper)
  3. Niko Miekka (1 paper)
  4. Tapio Salakoski (9 papers)
  5. Filip Ginter (28 papers)
Citations (9)
