
FlexVAR: Flexible Visual Autoregressive Modeling without Residual Prediction (2502.20313v1)

Published 27 Feb 2025 in cs.CV

Abstract: This work challenges the residual prediction paradigm in visual autoregressive modeling and presents FlexVAR, a new Flexible Visual AutoRegressive image generation paradigm. FlexVAR facilitates autoregressive learning with ground-truth prediction, enabling each step to independently produce plausible images. This simple, intuitive approach swiftly learns visual distributions and makes the generation process more flexible and adaptable. Trained solely on low-resolution images ($\leq$ 256px), FlexVAR can: (1) Generate images of various resolutions and aspect ratios, even exceeding the resolution of the training images. (2) Support various image-to-image tasks, including image refinement, in/out-painting, and image expansion. (3) Adapt to various autoregressive steps, allowing for faster inference with fewer steps or enhancing image quality with more steps. Our 1.0B model outperforms its VAR counterpart on the ImageNet 256$\times$256 benchmark. Moreover, when zero-shot transferring the image generation process to 13 steps, the performance further improves to 2.08 FID, outperforming state-of-the-art autoregressive models AiM/VAR by 0.25/0.28 FID and popular diffusion models LDM/DiT by 1.52/0.19 FID, respectively. When transferring our 1.0B model to the ImageNet 512$\times$512 benchmark in a zero-shot manner, FlexVAR achieves competitive results compared to the VAR 2.3B model, which is a fully supervised model trained at 512$\times$512 resolution.

Authors (9)
  1. Siyu Jiao (8 papers)
  2. Gengwei Zhang (12 papers)
  3. Yinlong Qian (8 papers)
  4. Jiancheng Huang (22 papers)
  5. Yao Zhao (272 papers)
  6. Humphrey Shi (97 papers)
  7. Lin Ma (206 papers)
  8. Yunchao Wei (151 papers)
  9. Zequn Jie (60 papers)

Summary

Overview of FlexVAR: Flexible Visual Autoregressive Modeling without Residual Prediction

The paper "FlexVAR: Flexible Visual Autoregressive Modeling without Residual Prediction" introduces a novel approach to visual autoregressive (AR) modeling, aimed at addressing the limitations faced by current methodologies that rely heavily on residual prediction paradigms. This work presents FlexVAR, a framework designed to enhance the flexibility and adaptability of image generation through an autoregressive process that focuses on predicting ground-truth values at each step rather than the residuals. This departure from traditional methods is noteworthy for its simplicity and efficacy in learning visual distributions.

Key Contributions

  1. Ground-Truth Prediction Paradigm:
    • FlexVAR innovatively departs from the residual prediction paradigm, offering a method where ground-truth images are independently produced at each autoregressive step. This eliminates the need for rigid step-wise designs that limit resolution and aspect ratio capabilities, thereby increasing the flexibility of image generation tasks.
  2. Scalable Image Generation:
    • The model is trained solely on low-resolution images (up to 256px), yet it is capable of generating images with varying resolutions and aspect ratios, which can exceed the training resolution. This is achieved without any fine-tuning, indicating a substantial generalization capacity.
  3. Enhanced Inference Efficiency:
    • The FlexVAR framework supports a variable number of autoregressive steps, allowing quicker inference with fewer steps or higher image quality with more steps; a minimal sketch of such variable-step inference follows this list. This capability yields significant performance improvements on benchmarks, outperforming state-of-the-art autoregressive models (AiM/VAR) and diffusion models (LDM/DiT) in FID.
  4. Zero-Shot Transferability:
    • One of the remarkable aspects of FlexVAR is its ability to perform zero-shot transfer on different image generation tasks, including image refinement, in/out-painting, and expansion. This broadens the practical applicability of the model without requiring retraining or extensive adjustments.
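The sketch below illustrates why variable step counts work: since each step predicts a ground-truth image rather than a residual, the scale schedule is just a list that can be shortened or lengthened at inference time. The schedules shown and the `predict_scale`/`vqvae_decode` methods are hypothetical placeholders, not the paper's actual API.

```python
import torch

# Hypothetical scale schedules; these are illustrative values,
# not the schedule reported in the paper.
FAST_SCHEDULE = [1, 2, 4, 8, 16]                       # fewer steps -> faster inference
QUALITY_SCHEDULE = [1, 2, 3, 4, 5, 6, 8, 10, 13, 16]   # more steps -> higher fidelity

@torch.no_grad()
def generate(model, class_label: int, schedule: list[int]) -> torch.Tensor:
    """Run scale-wise AR generation with an arbitrary step schedule.

    Because each step predicts a ground-truth image rather than a residual,
    the schedule may differ from the one used at training time (zero-shot).
    `model.predict_scale` is a hypothetical API that maps the token maps
    produced so far (plus class conditioning) to tokens at the next scale;
    `model.vqvae_decode` decodes the finest token map into pixels.
    """
    tokens = []  # token maps from earlier (coarser) scales
    for s in schedule:
        tokens.append(model.predict_scale(tokens, class_label, size=s))
    return model.vqvae_decode(tokens[-1])
```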

Numerical and Comparative Strengths

FlexVAR, particularly the 1.0B model, outperforms its VAR counterpart on the ImageNet 256×256 benchmark; when the generation process is transferred zero-shot to 13 autoregressive steps, it reaches an FID of 2.08. This outperforms the AiM/VAR models by 0.25/0.28 FID and the diffusion models LDM/DiT by 1.52/0.19 FID, respectively. Additionally, when the 1.0B model is transferred zero-shot to ImageNet 512×512, FlexVAR remains competitive with the VAR 2.3B model, which was fully supervised at 512×512 resolution.

Methodological Innovations

  • Scalable VQVAE Tokenizer: The introduction of a new VQVAE tokenizer with multi-scale constraints allows for effective image reconstruction across arbitrary resolutions, enhancing robustness to various latent scales.
  • Scalable 2D Positional Embeddings: These embeddings, with learnable queries initialized from 2D sine-cosine weights, enable scale-wise autoregressive modeling that adapts to resolutions and step counts beyond those seen in training (see the sketch below).
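As a reference point, here is a minimal sketch of standard 2D sine-cosine positional embeddings that can be regenerated for any grid size. The function name and dimensions are illustrative assumptions; FlexVAR's embeddings additionally place learnable queries on top of such an initialization.

```python
import numpy as np

def sincos_2d(embed_dim: int, h: int, w: int) -> np.ndarray:
    """2D sine-cosine positional embeddings for an h x w token grid.

    Because the embedding is a pure function of grid coordinates, it can be
    regenerated for any resolution at inference time, which is what makes
    a scale-wise model extensible beyond its training resolutions.
    embed_dim must be divisible by 4 (sin and cos for each of the two axes).
    """
    assert embed_dim % 4 == 0

    def encode_axis(pos: np.ndarray, dim: int) -> np.ndarray:
        # Classic sinusoidal frequencies: omega_k = 1 / 10000^(2k/dim).
        omega = 1.0 / 10000 ** (np.arange(dim // 2) / (dim // 2))
        angles = np.outer(pos, omega)                     # (N, dim/2)
        return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

    grid_y, grid_x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    emb_y = encode_axis(grid_y.reshape(-1), embed_dim // 2)
    emb_x = encode_axis(grid_x.reshape(-1), embed_dim // 2)
    return np.concatenate([emb_y, emb_x], axis=1)         # (h*w, embed_dim)

# Hypothetical usage: initialize learnable queries for a 16x16 latent grid.
# queries = torch.nn.Parameter(
#     torch.from_numpy(sincos_2d(768, 16, 16)).float())
```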

Implications and Future Directions

The FlexVAR framework has significant implications for the field of AI in image processing, suggesting that ground-truth prediction can simplify and potentially improve the autoregressive modeling process. Moreover, the approach promotes greater flexibility and efficiency, allowing models to generalize across tasks and resolutions. Future developments may explore the application of this paradigm in other domains, such as video modeling, or its integration with other generative frameworks to further enhance the quality and scope of AI-generated content. Furthermore, the potential for scaling up these models while maintaining or improving efficiency and flexibility warrants further investigation.

Overall, the paper provides a substantial contribution to visual autoregressive modeling, likely influencing subsequent advancements and innovations in the domain of computer vision.