
Winner Team Mia at TextVQA Challenge 2021: Vision-and-Language Representation Learning with Pre-trained Sequence-to-Sequence Model (2106.15332v1)

Published 24 Jun 2021 in cs.CV

Abstract: TextVQA requires models to read and reason about text in images in order to answer questions about them. Specifically, models need to incorporate a new modality, the text present in the images, and reason over it to answer TextVQA questions. In this challenge, we use the generative model T5 for the TextVQA task. Starting from the pre-trained T5-3B checkpoint from the HuggingFace repository, two additional pre-training tasks, masked language modeling (MLM) and relative position prediction (RPP), are designed to better align object features and scene text. During pre-training, the encoder is dedicated to handling the fusion among multiple modalities: question text, object text labels, scene text labels, object visual features, and scene visual features. The decoder then generates the text sequence step by step, trained with the default cross-entropy loss. We use a large-scale scene text dataset for pre-training and then fine-tune T5-3B on the TextVQA dataset only.
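The abstract describes fusing several modalities in the encoder input. A minimal sketch of how the textual modalities (question, object labels, scene-text labels) might be flattened into one encoder sequence is shown below; the function name, separator token, and layout are illustrative assumptions, not the paper's actual implementation, and the visual features would be injected as continuous embeddings rather than tokens.

```python
# Hypothetical sketch: concatenate the textual modalities from the abstract
# (question text, object text labels, scene text labels) into a single
# encoder input sequence. The "<sep>" token and function name are
# illustrative assumptions, not the authors' actual preprocessing.

def build_encoder_input(question, object_labels, scene_text_labels, sep="<sep>"):
    """Flatten the textual modalities into one token list for the encoder."""
    tokens = question.split()          # question text
    tokens.append(sep)
    for label in object_labels:        # object text labels
        tokens.extend(label.split())
    tokens.append(sep)
    for label in scene_text_labels:    # scene text labels (OCR results)
        tokens.extend(label.split())
    return tokens

seq = build_encoder_input(
    "what is written on the sign",
    ["sign", "pole"],
    ["stop", "ahead"],
)
print(seq)
```

The decoder would then generate the answer token by token from this fused representation, with cross-entropy loss against the ground-truth answer sequence.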

Authors (9)
  1. Yixuan Qiao (10 papers)
  2. Hao Chen (1006 papers)
  3. Jun Wang (991 papers)
  4. Yihao Chen (40 papers)
  5. Xianbin Ye (6 papers)
  6. Ziliang Li (8 papers)
  7. Xianbiao Qi (38 papers)
  8. Peng Gao (402 papers)
  9. Guotong Xie (31 papers)
Citations (9)
