MatchVIE: Exploiting Match Relevancy between Entities for Visual Information Extraction (2106.12940v1)

Published 24 Jun 2021 in cs.CV and cs.AI

Abstract: The Visual Information Extraction (VIE) task aims to extract key information from diverse document images (e.g., invoices and purchase receipts). Most previous methods treat VIE simply as a sequence labeling or classification problem, requiring models to carefully identify each kind of semantics by introducing multimodal features such as font, color, and layout. But multimodal features alone do not work well for numeric semantic categories or ambiguous texts. To address this issue, this paper proposes MatchVIE, a novel key-value matching model based on a graph neural network. Through key-value matching based on relevancy evaluation, MatchVIE can bypass recognition of the various semantics and focus instead on the strong relevancy between entities. In addition, we introduce a simple but effective operation, Num2Vec, to tackle the instability of encoded values, which helps the model converge more smoothly. Comprehensive experiments demonstrate that MatchVIE significantly outperforms previous methods. Notably, to the best of our knowledge, MatchVIE may be the first attempt to tackle the VIE task by modeling the relevancy between keys and values, and it is a good complement to existing methods.
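
To make the two ideas named in the abstract more concrete, here is a minimal, hypothetical sketch in PyTorch. It is not the authors' implementation: the abstract does not specify Num2Vec's exact form or the GNN architecture, so the digit-wise encoding, the bilinear relevancy scorer, and all names and shapes below are illustrative assumptions.

```python
# Hypothetical sketch of (1) a Num2Vec-style digit-wise encoding of numeric
# values and (2) a pairwise key-value relevancy score. All names, shapes,
# and the bilinear scorer are assumptions, not the paper's implementation.

import torch
import torch.nn as nn


def num2vec(value: float, num_digits: int = 8) -> torch.Tensor:
    """Encode a non-negative scalar as fixed-length digit features in [0, 1),
    so nearby values get nearby encodings (stabler than raw magnitudes)."""
    scaled = int(round(value * 10 ** (num_digits // 2)))  # fix the decimal point
    digits = []
    for _ in range(num_digits):
        digits.append(scaled % 10)
        scaled //= 10
    return torch.tensor(digits[::-1], dtype=torch.float32) / 10.0


class KeyValueRelevancy(nn.Module):
    """Score how strongly each key entity matches each value entity.
    A bilinear form is one simple choice; the paper evaluates relevancy
    with a graph neural network, which this sketch does not reproduce."""

    def __init__(self, dim: int):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, key_emb: torch.Tensor, value_emb: torch.Tensor) -> torch.Tensor:
        # key_emb: (num_keys, dim), value_emb: (num_values, dim)
        k = key_emb.unsqueeze(1).expand(-1, value_emb.size(0), -1)
        v = value_emb.unsqueeze(0).expand(key_emb.size(0), -1, -1)
        scores = self.bilinear(k.reshape(-1, k.size(-1)),
                               v.reshape(-1, v.size(-1)))
        # Matrix of match logits; argmax over each row picks the best value.
        return scores.view(key_emb.size(0), value_emb.size(0))
```

Casting extraction as matching means the model never has to decide *what* a field is (total, date, tax), only *which* key it belongs to, which is why the abstract argues it sidesteps ambiguous and numeric semantic categories.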

Authors (9)
  1. Guozhi Tang (8 papers)
  2. Lele Xie (8 papers)
  3. Lianwen Jin (116 papers)
  4. Jiapeng Wang (22 papers)
  5. Jingdong Chen (61 papers)
  6. Zhen Xu (76 papers)
  7. Yaqiang Wu (12 papers)
  8. Hui Li (1004 papers)
  9. QianYing Wang (27 papers)
Citations (28)
