
Efficient End-to-End Visual Document Understanding with Rationale Distillation (2311.09612v2)

Published 16 Nov 2023 in cs.CV and cs.CL

Abstract: Understanding visually situated language requires interpreting complex layouts of textual and visual elements. Pre-processing tools, such as optical character recognition (OCR), can map document image inputs to textual tokens, over which LLMs can then reason. However, such methods have high computational and engineering complexity. Can small pretrained image-to-text models instead understand visual documents accurately through similar recognition and reasoning steps? We propose Rationale Distillation (RD), which incorporates the outputs of OCR tools, LLMs, and larger multimodal models as intermediate "rationales", and trains a small student model to predict both rationales and answers. On three visual document understanding benchmarks representing infographics, scanned documents, and figures, our Pix2Struct (282M parameters) student model finetuned with RD outperforms the base model by 4-5% absolute accuracy with only 1% higher computational cost.
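
The core training recipe the abstract describes, supervising a small image-to-text student to emit a teacher-produced rationale before the answer, can be sketched in a few lines. Below is a minimal illustration using a Hugging Face Pix2Struct checkpoint; the checkpoint name, "[ANS]" separator, and learning rate are illustrative assumptions, not details taken from the paper.

```python
# Sketch of one Rationale Distillation (RD) training step, assuming a
# Pix2Struct student trained to generate "rationale [ANS] answer" text.
import torch
from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration

processor = Pix2StructProcessor.from_pretrained("google/pix2struct-docvqa-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-docvqa-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # lr is an assumption

def rd_training_step(image, question, rationale, answer):
    # Pix2Struct renders the question as a text header on the document image.
    inputs = processor(images=image, text=question, return_tensors="pt")
    # Supervise the student on the teacher rationale followed by the answer;
    # "[ANS]" is a hypothetical separator, not the paper's exact format.
    target = f"{rationale} [ANS] {answer}"
    labels = processor.tokenizer(target, return_tensors="pt").input_ids
    outputs = model(**inputs, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```

At inference, the student generates the rationale and answer in one decoding pass, which is why the abstract reports only about 1% extra computational cost over the base model.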

Authors (6)
  1. Wang Zhu
  2. Alekh Agarwal
  3. Mandar Joshi
  4. Robin Jia
  5. Jesse Thomason
  6. Kristina Toutanova