SelfDoc: Self-Supervised Document Representation Learning (2106.03331v1)

Published 7 Jun 2021 in cs.CV and cs.CL

Abstract: We propose SelfDoc, a task-agnostic pre-training framework for document image understanding. Because documents are multimodal and intended for sequential reading, our framework exploits the positional, textual, and visual information of every semantically meaningful component in a document, and it models the contextualization among blocks of content. Unlike existing document pre-training models, our model is coarse-grained: instead of treating individual words as input, it operates on semantic blocks, thereby avoiding an overly fine-grained representation with excessive contextualization. Beyond that, we introduce cross-modal learning in the pre-training phase to fully leverage multimodal information from unlabeled documents. For downstream usage, we propose a novel modality-adaptive attention mechanism that fuses multimodal features by adaptively emphasizing language and vision signals. Our framework benefits from self-supervised pre-training on unlabeled documents, using a feature masking training strategy that requires no annotations. It achieves superior performance on multiple downstream tasks while using significantly fewer document images in the pre-training stage than previous works.
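
To make the fusion idea concrete, here is a minimal PyTorch sketch of a modality-adaptive attention layer: per document block, it scores the language and vision features, normalizes the scores with a softmax, and takes a weighted sum. This is an illustrative reconstruction from the abstract, not the paper's implementation; the class name, the concatenation-based scoring, and the scalar per-modality weights are assumptions.

```python
import torch
import torch.nn as nn

class ModalityAdaptiveAttention(nn.Module):
    """Hypothetical sketch: fuse language and vision features for each
    document block by adaptively weighting the two modalities."""

    def __init__(self, dim: int):
        super().__init__()
        # Score each modality conditioned on both features (an assumption;
        # the paper's exact parameterization may differ).
        self.score = nn.Linear(2 * dim, 2)

    def forward(self, lang: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        # lang, vis: (batch, num_blocks, dim) features for the same blocks
        logits = self.score(torch.cat([lang, vis], dim=-1))  # (B, N, 2)
        w = torch.softmax(logits, dim=-1)                    # per-block weights
        # Weighted sum of the two modalities, broadcast over the feature dim
        return w[..., :1] * lang + w[..., 1:] * vis          # (B, N, dim)

# Usage example with made-up shapes:
fuse = ModalityAdaptiveAttention(dim=768)
lang = torch.randn(2, 50, 768)   # e.g. text features for 50 blocks
vis = torch.randn(2, 50, 768)    # matching visual features
out = fuse(lang, vis)            # (2, 50, 768)
```

The key property this illustrates is that the language/vision balance is learned per block rather than fixed, which is what "adaptively emphasizing language and vision signals" suggests.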

Authors (8)
  1. Peizhao Li (18 papers)
  2. Jiuxiang Gu (73 papers)
  3. Jason Kuen (32 papers)
  4. Vlad I. Morariu (31 papers)
  5. Handong Zhao (38 papers)
  6. Rajiv Jain (20 papers)
  7. Varun Manjunatha (23 papers)
  8. Hongfu Liu (38 papers)
Citations (150)