
FUNSD: A Dataset for Form Understanding in Noisy Scanned Documents (1905.13538v2)

Published 27 May 2019 in cs.IR, cs.CV, cs.LG, and stat.ML

Abstract: We present a new dataset for form understanding in noisy scanned documents (FUNSD) that aims at extracting and structuring the textual content of forms. The dataset comprises 199 real, fully annotated, scanned forms. The documents are noisy and vary widely in appearance, making form understanding (FoUn) a challenging task. The proposed dataset can be used for various tasks, including text detection, optical character recognition, spatial layout analysis, and entity labeling/linking. To the best of our knowledge, this is the first publicly available dataset with comprehensive annotations to address FoUn task. We also present a set of baselines and introduce metrics to evaluate performance on the FUNSD dataset, which can be downloaded at https://guillaumejaume.github.io/FUNSD/.

Citations (315)

Summary

  • The paper introduces a novel dataset of 199 annotated forms that capture diverse layouts and noise challenges.
  • It employs state-of-the-art techniques for text detection, spatial layout analysis, and semantic entity linking to benchmark form understanding.
  • Results indicate that data-driven models outperform traditional OCR, emphasizing the need for adaptive, template-agnostic approaches.

Insights into the FUNSD Dataset for Form Understanding in Noisy Scanned Documents

The paper presents FUNSD (Form Understanding in Noisy Scanned Documents), a dataset aimed at advancing document intelligence under particularly challenging conditions: noisy scanned forms. With 199 fully annotated forms spanning a broad variety of structures and content, FUNSD serves as a significant benchmark for developing and evaluating form understanding (FoUn) techniques.

The dataset targets the automatic extraction and structuring of information from forms, a task that goes beyond optical character recognition (OCR) by requiring accurate spatial layout analysis and an understanding of entity relationships. It is designed to support a range of document-processing tasks, including text detection, OCR, spatial layout analysis, and entity labeling and linking.
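
To make the annotation structure concrete, the sketch below loads one page of FUNSD annotations into plain Python dictionaries. It assumes the JSON layout of the public release, in which each file holds a `form` list whose entries carry `id`, `label`, `text`, `box`, `words`, and `linking` fields; the directory path is a placeholder.

```python
import json
from pathlib import Path

# Placeholder path to the extracted FUNSD release (training split).
FUNSD_ROOT = Path("funsd/training_data")

def load_annotation(json_path):
    """Parse one FUNSD annotation file into a list of semantic entities."""
    with open(json_path, encoding="utf-8") as f:
        page = json.load(f)
    entities = []
    for ent in page["form"]:
        entities.append({
            "id": ent["id"],          # unique entity id within the page
            "label": ent["label"],    # question / answer / header / other
            "text": ent["text"],      # concatenated entity text
            "box": ent["box"],        # [x0, y0, x1, y1] in pixel coordinates
            "words": ent["words"],    # word-level boxes and transcriptions
            "links": ent["linking"],  # pairs of entity ids (e.g. question -> answer)
        })
    return entities

if __name__ == "__main__":
    for json_path in sorted((FUNSD_ROOT / "annotations").glob("*.json")):
        entities = load_annotation(json_path)
        n_links = sum(len(e["links"]) for e in entities)
        print(f"{json_path.name}: {len(entities)} entities, {n_links} link endpoints")
```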

Unique Contributions

A key contribution of the dataset is its diversity and complexity, which stem from real-world forms exhibiting varied noise and layouts. Such a dataset matters because it pushes the boundaries of current FoUn methods, which must adapt to the diverse document formats found across domains such as marketing, scientific reporting, and advertising.

The FUNSD dataset also comes with explicitly defined metrics and baselines for performance evaluation. Notably, the authors report four baselines for text detection, evaluating off-the-shelf state-of-the-art engines alongside retrained models to establish performance benchmarks.
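
As an illustration of the kind of IoU-based word-detection scoring such benchmarks rely on, the sketch below computes an F1 score with greedy one-to-one matching at a fixed IoU threshold. This is a generic formulation, not necessarily the authors' exact evaluation protocol.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x0, y0, x1, y1]."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_f1(pred_boxes, gt_boxes, thr=0.5):
    """Greedily match each prediction to an unused ground-truth box at IoU >= thr."""
    matched, tp = set(), 0
    for p in pred_boxes:
        best_j, best_iou = None, thr
        for j, g in enumerate(gt_boxes):
            if j in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best_j, best_iou = j, v
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    precision = tp / len(pred_boxes) if pred_boxes else 0.0
    recall = tp / len(gt_boxes) if gt_boxes else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```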

Evaluation of Current Methods

The baseline results indicate that while a traditional engine such as Tesseract lags in performance, data-driven approaches such as Faster R-CNN yield the best results for word-level text detection. Google Vision also demonstrates strong generalization without retraining, underscoring the value of models that adapt well to new document types with little or no additional training.
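
For reference, a retrained word detector of this kind can be set up with the standard torchvision fine-tuning recipe, replacing the classification head with a two-class (background vs. word) predictor. This is an illustrative configuration, not the authors' training setup; data loading and hyperparameters are omitted.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_word_detector():
    """Faster R-CNN with a 2-class head: background vs. 'word'."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
    return model

model = build_word_detector()
model.train()
# One illustrative training step: `images` is a list of CHW float tensors,
# `targets` a list of dicts with "boxes" ([N, 4] in xyxy) and "labels" ([N], all 1).
images = [torch.rand(3, 1000, 760)]
targets = [{"boxes": torch.tensor([[100., 120., 260., 150.]]),
            "labels": torch.tensor([1])}]
losses = model(images, targets)      # dict of classification / regression losses
total_loss = sum(losses.values())
total_loss.backward()
```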

In word grouping, treated as a clustering problem, baseline performance is expectedly low, highlighting the need for learned methods that integrate spatial and semantic information. Semantic entity labeling, performed with a multi-layer perceptron, achieves only moderate success, suggesting room for improvement through richer entity representations and classification techniques; a schematic version of such a baseline is sketched below.
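
The snippet below sketches an MLP entity-labeling baseline over a hashed bag-of-words of the entity text concatenated with normalized box geometry. The feature choices are illustrative assumptions, not the authors' exact configuration; it consumes entity dictionaries of the form produced by the loader shown earlier.

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.neural_network import MLPClassifier

# Illustrative features: small hashed bag-of-words plus normalized box geometry.
vectorizer = HashingVectorizer(n_features=256, alternate_sign=False)

def featurize(entities, page_width, page_height):
    """Stack text and geometry features for a list of entity dicts."""
    text_feats = vectorizer.transform([e["text"] for e in entities]).toarray()
    geom = np.array([[e["box"][0] / page_width,  e["box"][1] / page_height,
                      e["box"][2] / page_width,  e["box"][3] / page_height]
                     for e in entities])
    return np.hstack([text_feats, geom])

# X_train / y_train would be built by running featurize over the training pages,
# with y_train holding the entity labels (question / answer / header / other).
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300)
# clf.fit(X_train, y_train)
# predictions = clf.predict(featurize(test_entities, page_w, page_h))
```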

Implications and Future Directions

The presented dataset and its extensive annotation provide a basis for the development of robust, template-agnostic FoUn systems. The task-driven structure of the dataset allows for incremental advancements in each sub-component, including text detection, recognition, and form-specific relationship analysis. Indeed, enhancing the accuracy of systems trained on FUNSD could lead to substantial improvements in practical applications like digital form archiving, automated data entry, and form-based business analytics.

Looking forward, the introduction of handwritten components and real-time processing capabilities could enhance system robustness. Integrating advances in neural representation learning and graph-based relational modeling will likely drive the future of high-fidelity form understanding.

In conclusion, the FUNSD dataset is a critical resource anchoring future research and applications in form understanding by providing a challenging and representative benchmark for evaluating the effectiveness of proposed methodologies in the field of noisy document processing.
