TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation (2412.03069v1)

Published 4 Dec 2024 in cs.CV and cs.AI

Abstract: We present TokenFlow, a novel unified image tokenizer that bridges the long-standing gap between multimodal understanding and generation. Prior research attempts to employ a single reconstruction-targeted Vector Quantization (VQ) encoder for unifying these two tasks. We observe that understanding and generation require fundamentally different granularities of visual information. This leads to a critical trade-off, particularly compromising performance in multimodal understanding tasks. TokenFlow addresses this challenge through an innovative dual-codebook architecture that decouples semantic and pixel-level feature learning while maintaining their alignment via a shared mapping mechanism. This design enables direct access to both high-level semantic representations crucial for understanding tasks and fine-grained visual features essential for generation through shared indices. Our extensive experiments demonstrate TokenFlow's superiority across multiple dimensions. Leveraging TokenFlow, we demonstrate for the first time that discrete visual input can surpass LLaVA-1.5 13B in understanding performance, achieving a 7.2% average improvement. For image reconstruction, we achieve a strong FID score of 0.63 at 384×384 resolution. Moreover, TokenFlow establishes state-of-the-art performance in autoregressive image generation with a GenEval score of 0.55 at 256×256 resolution, achieving comparable results to SDXL.

TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation

The paper "TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation" introduces TokenFlow, a unified image tokenizer that aims to overcome the limitations of previous multimodal systems, which typically rely on a single reconstruction-targeted Vector Quantization (VQ) encoder. The authors identify that understanding and generation tasks require distinct granularities of visual information, and propose a novel dual-codebook architecture to address this issue.

This dual-codebook approach separates semantic and pixel-level feature learning while maintaining alignment via a shared mapping mechanism. The design enables access, through shared indices, to both the high-level semantic representations needed for understanding tasks and the fine-grained visual features crucial for generation tasks. Notably, TokenFlow scales well, sustaining a codebook utilization rate exceeding 95% even with large codebooks containing over 130K entries.
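To make the shared-index idea concrete, the following is a minimal sketch (not the paper's actual implementation) of how a dual-codebook quantizer might pick one shared token index from both a semantic and a pixel-level codebook. The function name, the use of a simple weighted sum of squared distances, and the weight `w_sem` are all assumptions for illustration:

```python
import numpy as np

def dual_codebook_quantize(z_sem, z_pix, cb_sem, cb_pix, w_sem=1.0):
    """Hypothetical shared-index quantization over two aligned codebooks.

    cb_sem and cb_pix have the same number of entries; entry i in one
    corresponds to entry i in the other, so a single argmin keeps the
    semantic and pixel representations aligned.
    """
    # squared distances from each input feature to every codebook entry
    d_sem = np.sum((cb_sem - z_sem) ** 2, axis=1)
    d_pix = np.sum((cb_pix - z_pix) ** 2, axis=1)
    # one shared index minimizes the (assumed) weighted joint distance
    idx = int(np.argmin(w_sem * d_sem + d_pix))
    # the same index retrieves both the semantic and the pixel token
    return idx, cb_sem[idx], cb_pix[idx]

rng = np.random.default_rng(0)
cb_sem = rng.normal(size=(8, 4))   # toy semantic codebook: 8 entries, dim 4
cb_pix = rng.normal(size=(8, 6))   # toy pixel codebook: same 8 indices, dim 6
idx, q_sem, q_pix = dual_codebook_quantize(cb_sem[3], cb_pix[3], cb_sem, cb_pix)
print(idx)  # querying with entry 3's own features recovers shared index 3
```

The point of the sketch is that a downstream model only ever sees the single index `idx`: an understanding model can decode it via `cb_sem`, a generation model via `cb_pix`, without two separate tokenizations.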

Empirical evaluation of TokenFlow reveals compelling strengths across several benchmark dimensions. In particular, it shows for the first time that discrete visual input can surpass LLaVA-1.5 13B in multimodal understanding, with a 7.2% average improvement. In terms of image reconstruction, TokenFlow achieves a strong FID score of 0.63 at 384×384 resolution, and it excels in autoregressive image generation with a GenEval score of 0.55 at 256×256 resolution, comparable to the state-of-the-art SDXL model.

The paper addresses a longstanding question of whether a single image tokenizer can derive representations suitable for both multimodal understanding and generation. TokenFlow effectively bridges this gap and, through its sophisticated shared mapping strategy, achieves substantial performance improvements in both tasks. The innovation of decoupling semantic and pixel-level features while allowing for their interaction presents a significant advance in the development of more efficient and versatile multimodal models.

Practically, TokenFlow paves the way for more efficient visual-linguistic preprocessing pipelines by reducing the need for separate encoders for different tasks. Theoretically, this research suggests potential for future development of integrated multimodal systems that can adapt to diverse visual data processing needs without compromising on the quality of outputs.

In conclusion, TokenFlow emerges as a robust candidate for a universal visual tokenizer framework, demonstrating notable improvements in both image understanding and generation domains. The dual-path approach and shared feature mapping may inspire further exploration into integrating multiple levels of data abstraction into unified models, potentially influencing future architectures in the field of artificial intelligence.

Authors (10)
  1. Liao Qu (9 papers)
  2. Huichao Zhang (9 papers)
  3. Yiheng Liu (24 papers)
  4. Xu Wang (319 papers)
  5. Yi Jiang (171 papers)
  6. Yiming Gao (26 papers)
  7. Hu Ye (6 papers)
  8. Daniel K. Du (12 papers)
  9. Zehuan Yuan (65 papers)
  10. Xinglong Wu (34 papers)
Citations (1)