DocTr: Document Transformer for Structured Information Extraction in Documents (2307.07929v1)

Published 16 Jul 2023 in cs.CV

Abstract: We present a new formulation for structured information extraction (SIE) from visually rich documents. It aims to address the limitations of existing IOB tagging or graph-based formulations, which are either overly reliant on the correct ordering of input text or struggle with decoding a complex graph. Instead, motivated by anchor-based object detectors in vision, we represent an entity as an anchor word and a bounding box, and represent entity linking as the association between anchor words. This is more robust to text ordering, and maintains a compact graph for entity linking. The formulation motivates us to introduce 1) a DOCument TRansformer (DocTr) that aims at detecting and associating entity bounding boxes in visually rich documents, and 2) a simple pre-training strategy that helps learn entity detection in the context of language. Evaluations on three SIE benchmarks show the effectiveness of the proposed formulation, and the overall approach outperforms existing solutions.
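As a rough illustration of the formulation described in the abstract (an entity represented as an anchor word plus a bounding box, and entity linking represented as associations between anchor words), the sketch below shows one possible data representation. It is a minimal, hypothetical example; the class names, fields, and toy form content are assumptions for illustration and are not taken from the paper or its code.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Entity:
    """An entity under the anchor-word formulation: an anchor word plus a bounding box."""
    anchor_word_idx: int                      # index of the anchor word among the OCR'd words
    bbox: Tuple[float, float, float, float]   # (x0, y0, x1, y1) of the entity region
    label: str                                # entity category, e.g. "question" or "answer"

@dataclass
class Document:
    words: List[str]                          # OCR words; their ordering may be noisy
    entities: List[Entity]                    # detected entities (anchor word + box)
    links: List[Tuple[int, int]]              # entity linking as pairs of entity indices,
                                              # i.e. associations between anchor words

# Toy example: a form with one question-answer pair.
doc = Document(
    words=["Name", ":", "Jane", "Doe"],
    entities=[
        Entity(anchor_word_idx=0, bbox=(10, 10, 60, 25), label="question"),
        Entity(anchor_word_idx=2, bbox=(70, 10, 140, 25), label="answer"),
    ],
    links=[(0, 1)],  # the "Name" question entity is linked to the "Jane Doe" answer entity
)
```

Because each entity is tied to an anchor word and a box rather than to a contiguous tagged span, this representation does not depend on the reading order of the OCR output, and linking remains a compact set of pairwise associations rather than a dense graph over all words.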

Authors (9)
  1. Haofu Liao (34 papers)
  2. Aruni RoyChowdhury (9 papers)
  3. Weijian Li (39 papers)
  4. Ankan Bansal (15 papers)
  5. Yuting Zhang (30 papers)
  6. Zhuowen Tu (80 papers)
  7. Ravi Kumar Satzoda (5 papers)
  8. R. Manmatha (31 papers)
  9. Vijay Mahadevan (16 papers)
Citations (9)
