Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation (2503.16430v2)

Published 20 Mar 2025 in cs.CV

Abstract: Autoregressive visual generation models typically rely on tokenizers to compress images into tokens that can be predicted sequentially. A fundamental dilemma exists in token representation: discrete tokens enable straightforward modeling with standard cross-entropy loss, but suffer from information loss and tokenizer training instability; continuous tokens better preserve visual details, but require complex distribution modeling, complicating the generation pipeline. In this paper, we propose TokenBridge, which bridges this gap by maintaining the strong representation capacity of continuous tokens while preserving the modeling simplicity of discrete tokens. To achieve this, we decouple discretization from the tokenizer training process through post-training quantization that directly obtains discrete tokens from continuous representations. Specifically, we introduce a dimension-wise quantization strategy that independently discretizes each feature dimension, paired with a lightweight autoregressive prediction mechanism that efficiently models the resulting large token space. Extensive experiments show that our approach achieves reconstruction and generation quality on par with continuous methods while using standard categorical prediction. This work demonstrates that bridging discrete and continuous paradigms can effectively harness the strengths of both approaches, providing a promising direction for high-quality visual generation with simple autoregressive modeling. Project page: https://yuqingwang1029.github.io/TokenBridge.

Summary

  • The paper introduces TokenBridge, a novel approach that bridges continuous and discrete tokens using post-training dimension-wise quantization for autoregressive visual generation.
  • TokenBridge efficiently models the resulting large discrete token space via a dimension-wise prediction mechanism, maintaining high visual fidelity with simpler training than continuous methods.
  • Experiments demonstrate TokenBridge achieves reconstruction and generation quality comparable to state-of-the-art continuous approaches on ImageNet, offering computational efficiency and potential for future multimodal applications.

Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation

The paper "Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation" presents TokenBridge, an approach that aims to reconcile the advantages of continuous and discrete token representations in autoregressive visual generation models. The work addresses the dichotomy between discrete tokens, which are straightforward to model with cross-entropy loss but suffer from information loss and instability during tokenizer training, and continuous tokens, which preserve visual detail well but necessitate complex distribution modeling.

Technical Contributions

  • Post-Training Quantization: The authors introduce a dimension-wise quantization strategy applied to pretrained continuous tokens to derive discrete tokens post-training. This decoupling from the training phase allows for fine-grained quantization that retains high visual fidelity inherent in continuous representations, while enabling simpler categorical modeling.
  • Efficient Token Modeling: The resulting discrete tokens create an exponentially large token space, which poses a computational challenge. To address this, the paper proposes an autoregressive prediction mechanism that decomposes the prediction into a sequence of dimension-wise predictions, efficiently managing the expansive token space and capturing necessary inter-dimensional dependencies.
  • Autoregressive Generation Framework: The framework integrates a spatial autoregressive backbone with a dimension-wise token prediction head, enabling efficient and high-quality visual generation. The training leverages standard cross-entropy loss without necessitating complex distribution modeling.
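The post-training quantization step can be illustrated with a minimal sketch. This is not the paper's exact scheme: the quantile-based bin placement and the function names below are assumptions for illustration. The key idea it demonstrates is that each feature dimension of a pretrained continuous latent is discretized independently after training, and decoding simply maps indices back to per-dimension bin centers.

```python
import numpy as np

def fit_dimension_bins(latents, num_bins=16):
    """Fit per-dimension quantization bins from continuous latents.

    latents: (N, D) array of continuous token features from a frozen tokenizer.
    Quantile-based edges (an assumption here) give each bin roughly equal
    probability mass per dimension.
    """
    quantiles = np.linspace(0.0, 1.0, num_bins + 1)
    edges = np.quantile(latents, quantiles, axis=0)   # (num_bins + 1, D)
    centers = 0.5 * (edges[:-1] + edges[1:])          # (num_bins, D)
    return edges, centers

def quantize(latents, edges):
    """Map each feature dimension independently to a discrete bin index."""
    n, d_dim = latents.shape
    idx = np.empty((n, d_dim), dtype=np.int64)
    for d in range(d_dim):
        # searchsorted over interior edges yields indices in [0, num_bins - 1]
        idx[:, d] = np.searchsorted(edges[1:-1, d], latents[:, d])
    return idx

def dequantize(idx, centers):
    """Recover a continuous approximation from bin indices (for the decoder)."""
    d_dim = idx.shape[1]
    return np.stack([centers[idx[:, d], d] for d in range(d_dim)], axis=1)
```

Because the tokenizer is never retrained, reconstruction fidelity is governed only by the bin granularity per dimension, which is what lets the approach retain near-continuous quality while exposing a purely categorical interface to the generator.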
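The dimension-wise prediction mechanism can likewise be sketched. Rather than one softmax over the joint code space of size B^D (B bins, D dimensions), the token's distribution is factorized as p(c_1, ..., c_D | ctx) = prod_d p(c_d | c_<d, ctx), sampling one dimension at a time. The `heads` interface below is a hypothetical stand-in for the paper's lightweight prediction head, not its actual API.

```python
import numpy as np

def sample_token_dimensionwise(context, heads, num_bins=16, rng=None):
    """Sample one token's D discrete indices autoregressively over dimensions.

    `heads` is a list of D callables mapping (context, prev_indices) -> logits
    of shape (num_bins,); each dimension is conditioned on the spatial context
    plus the dimensions already sampled, capturing inter-dimensional
    dependencies at a cost linear in D instead of exponential.
    """
    rng = rng or np.random.default_rng()
    prev = []
    for head in heads:
        logits = head(context, prev)
        probs = np.exp(logits - logits.max())   # stable softmax
        probs /= probs.sum()
        prev.append(int(rng.choice(num_bins, p=probs)))
    return prev
```

Each classification is over only B classes, which is why standard cross-entropy training remains tractable despite the exponentially large effective token space.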

Experimental Evaluation

Experiments conducted on the ImageNet dataset demonstrate that TokenBridge achieves reconstruction quality comparable to continuous methods while employing a simpler autoregressive modeling approach. In terms of generation, the method matches the visual quality of state-of-the-art continuous approaches such as MAR, while retaining the simplicity of discrete modeling. Quantitatively, the paper reports FID and IS scores competitive with both discrete and continuous baselines.

Implications and Future Directions

TokenBridge offers a potentially efficient path forward for both visual and multimodal generation tasks, suggesting that the benefits of continuous token richness can be harnessed in a computationally efficient manner typical of discrete tokens. This balance could encourage the development of more unified and scalable multimodal systems based on autoregressive paradigms.

Furthermore, the efficiency of the dimension-wise autoregressive prediction introduces advantages in model complexity and computational requirements, positioning TokenBridge as a practical alternative in scenarios where computational resources are a limiting factor.

Overall, the approach presented in this paper not only bridges a significant gap in visual token representation but also sets a foundation for future explorations into more efficient and robust generative models in the field of artificial intelligence. Future work could explore leveraging advancements in continuous tokenizers and investigate the potential integration of TokenBridge into broader multimodal frameworks.
