
VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal Document Classification (2205.12029v3)

Published 24 May 2022 in cs.CV

Abstract: Multimodal learning from document data has achieved great success lately, as it allows pre-training semantically meaningful features as a prior for learnable downstream tasks. In this paper, we approach the document classification problem by learning cross-modal representations through language and vision cues, considering intra- and inter-modality relationships. Instead of merging features from different modalities into a joint representation space, the proposed method exploits high-level interactions and learns relevant semantic information from effective attention flows within and across modalities. The proposed learning objective is devised between intra- and inter-modality alignment tasks, where the similarity distribution per task is computed by contracting positive sample pairs while simultaneously contrasting negative ones in the joint representation space. Extensive experiments on public document classification datasets demonstrate the effectiveness and generality of our model on both small-scale and large-scale datasets.
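The alignment objective described in the abstract is contrastive: for each task, matched (positive) pairs are pulled together while mismatched (negative) pairs in the batch are pushed apart. The sketch below illustrates this idea with a symmetric InfoNCE-style loss in PyTorch; the function names, the temperature value, and the particular combination of intra- and inter-modality terms are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss over a batch.

    Matched (anchor[i], positive[i]) pairs are treated as positives;
    all cross-pairings in the batch serve as negatives. A hypothetical
    sketch, not the authors' implementation.
    """
    # Cosine similarity via L2-normalized embeddings.
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Positives sit on the diagonal; cross-entropy contrasts them
    # against every off-diagonal (negative) pairing, in both directions.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Intra-modality alignment (e.g., two views of the same document image)
# plus inter-modality alignment (vision vs. language embeddings).
B, D = 32, 256
vis, vis_aug = torch.randn(B, D), torch.randn(B, D)
txt = torch.randn(B, D)
loss = info_nce(vis, vis_aug) + info_nce(vis, txt)
```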

Authors (5)
  1. Souhail Bakkali (9 papers)
  2. Zuheng Ming (16 papers)
  3. Marçal Rusiñol (20 papers)
  4. Oriol Ramos Terrades (11 papers)
  5. Mickael Coustaty (6 papers)
Citations (27)