On Evaluation of Document Classification using RVL-CDIP (2306.12550v1)

Published 21 Jun 2023 in cs.CL

Abstract: The RVL-CDIP benchmark is widely used for measuring performance on the task of document classification. Despite its widespread use, we reveal several undesirable characteristics of the RVL-CDIP benchmark. These include (1) substantial amounts of label noise, which we estimate to be 8.1% (ranging from 1.6% to 16.9% per document category); (2) presence of many ambiguous or multi-label documents; (3) a large overlap between test and train splits, which can inflate model performance metrics; and (4) presence of sensitive personally-identifiable information like US Social Security numbers (SSNs). We argue that there is a risk in using RVL-CDIP for benchmarking document classifiers, as its limited scope, presence of errors (state-of-the-art models now achieve error rates that are within our estimated label error rate), and lack of diversity make it less than ideal for benchmarking. We further advocate for the creation of a new document classification benchmark, and provide recommendations for what characteristics such a resource should include.
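
To make the train/test overlap and PII concerns concrete, here is a minimal sketch (not the authors' code) of how one could audit an RVL-CDIP-style corpus, assuming one OCR text file per document organized by split. The directory layout, the per-document `.txt` assumption, and the loose SSN regex are illustrative; the check below only flags exact textual duplicates and SSN-shaped strings, whereas the paper's analysis also covers near-duplicate and ambiguous documents.

```python
import hashlib
import re
from pathlib import Path

# Hypothetical layout: one OCR text file per document, organized by split.
# These paths are illustrative assumptions, not the authors' actual pipeline.
TRAIN_DIR = Path("rvl_cdip_ocr/train")
TEST_DIR = Path("rvl_cdip_ocr/test")

# Loose SSN pattern (ddd-dd-dddd); a real audit would add validation rules
# (e.g., excluding area number 000 or 666) to cut false positives.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def fingerprint(text: str) -> str:
    """Hash of whitespace-normalized text, used to flag exact duplicates."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def scan_split(split_dir: Path):
    """Return {fingerprint: filename} and the list of files containing SSN-like strings."""
    hashes, ssn_hits = {}, []
    for path in sorted(split_dir.glob("*.txt")):
        text = path.read_text(errors="ignore")
        hashes[fingerprint(text)] = path.name
        if SSN_RE.search(text):
            ssn_hits.append(path.name)
    return hashes, ssn_hits

train_hashes, train_ssns = scan_split(TRAIN_DIR)
test_hashes, test_ssns = scan_split(TEST_DIR)

# Documents whose normalized OCR text appears verbatim in both splits.
overlap = set(train_hashes) & set(test_hashes)
print(f"exact train/test duplicates: {len(overlap)}")
print(f"SSN-like strings: {len(train_ssns)} train docs, {len(test_ssns)} test docs")
```

A fuller audit would replace the exact-hash fingerprint with a near-duplicate measure (e.g., shingled text similarity or perceptual hashing of the page images) to catch re-scanned or lightly edited copies of the same document.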
