Let's Fuse Step by Step: A Generative Fusion Decoding Algorithm with LLMs for Multi-modal Text Recognition (2405.14259v3)

Published 23 May 2024 in cs.CL and cs.AI

Abstract: We introduce "Generative Fusion Decoding" (GFD), a novel shallow fusion framework for integrating LLMs into multi-modal text recognition systems such as automatic speech recognition (ASR) and optical character recognition (OCR). We derive the formulas necessary for GFD to operate across the mismatched token spaces of different models by mapping the text token space to the byte token space, enabling seamless fusion during the decoding process. The framework is plug-and-play, compatible with various auto-regressive models, and does not require re-training for feature alignment, thus overcoming the limitations of previous fusion techniques. We highlight three main advantages of GFD. First, by simplifying the alignment of different model sample spaces, GFD allows LLMs to correct errors in tandem with the recognition model, reducing computation latency. Second, GFD fully capitalizes on the in-context learning ability of LLMs, increasing robustness in long-form and instruction-aware speech recognition. Third, GFD enables fusing recognition models deficient in Chinese text recognition with LLMs extensively trained on Chinese. Our evaluation demonstrates that GFD significantly improves performance on ASR and OCR tasks, with ASR reaching state-of-the-art performance on the NTUML2021 benchmark. GFD is a significant step forward in model integration, offering a unified solution for leveraging existing pre-trained models through step-by-step fusion.
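
The shallow-fusion idea behind GFD can be sketched in a few lines. The Python snippet below is a minimal illustration under stated assumptions, not the authors' implementation: `recognizer_logprob`, `llm_logprob`, the toy vocabulary, and the interpolation weight `lam` are all hypothetical stand-ins, and for brevity it reranks complete byte-level hypotheses, whereas GFD fuses the two scores step by step at every decoding step over a shared byte token space.

```python
# Hedged sketch of shallow fusion over a shared byte space.
# Both scorers below are toy stand-ins so the example runs end to end;
# in GFD they would be a recognition model (e.g. an ASR decoder) and an
# LLM, each mapped from its own text token space to byte tokens.

VOCAB = {b"hello", b"world"}  # hypothetical tiny LM vocabulary

def recognizer_logprob(candidate: bytes) -> float:
    # Toy recognizer: mildly prefers shorter hypotheses.
    return -0.1 * len(candidate)

def llm_logprob(candidate: bytes) -> float:
    # Toy LM: rewards hypotheses whose whitespace-split tokens are in-vocabulary.
    return sum(0.0 if tok in VOCAB else -2.0 for tok in candidate.split())

def fused_score(candidate: bytes, lam: float = 0.3) -> float:
    """Shallow-fusion score: log p_rec(y) + lam * log p_llm(y),
    computed over the shared byte representation of hypothesis y."""
    return recognizer_logprob(candidate) + lam * llm_logprob(candidate)

# Rerank a handful of byte-level hypotheses from a beam.
hypotheses = [b"hello world", b"hell oworld", b"helloworld"]
best = max(hypotheses, key=fused_score)
print(best.decode())  # -> "hello world"
```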

Authors (7)
  1. Chan-Jan Hsu (16 papers)
  2. Yi-Chang Chen (14 papers)
  3. Feng-Ting Liao (8 papers)
  4. Pei-Chen Ho (2 papers)
  5. Yu-Hsiang Wang (7 papers)
  6. Da-shan Shiu (27 papers)
  7. Po-chun Hsu (25 papers)
Citations (1)
