
Infinity: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis (2412.04431v2)

Published 5 Dec 2024 in cs.CV

Abstract: We present Infinity, a Bitwise Visual AutoRegressive Modeling capable of generating high-resolution, photorealistic images following language instruction. Infinity redefines visual autoregressive model under a bitwise token prediction framework with an infinite-vocabulary tokenizer & classifier and bitwise self-correction mechanism, remarkably improving the generation capacity and details. By theoretically scaling the tokenizer vocabulary size to infinity and concurrently scaling the transformer size, our method significantly unleashes powerful scaling capabilities compared to vanilla VAR. Infinity sets a new record for autoregressive text-to-image models, outperforming top-tier diffusion models like SD3-Medium and SDXL. Notably, Infinity surpasses SD3-Medium by improving the GenEval benchmark score from 0.62 to 0.73 and the ImageReward benchmark score from 0.87 to 0.96, achieving a win rate of 66%. Without extra optimization, Infinity generates a high-quality 1024x1024 image in 0.8 seconds, making it 2.6x faster than SD3-Medium and establishing it as the fastest text-to-image model. Models and codes will be released to promote further exploration of Infinity for visual generation and unified tokenizer modeling.

Summary

  • The paper introduces bitwise tokenization and an Infinite-Vocabulary Classifier that exponentially expands the vocabulary while reducing parameters by up to 99.95%.
  • It incorporates a self-correction mechanism that mitigates training-test discrepancies and lowers the FID score from 9.76 to 3.48.
  • Empirical results demonstrate a 0.962 ImageReward score, 93.11% human preference, and 0.8-second inference for 1024x1024 image generation.

Evaluation of Bitwise AutoRegressive Modeling in Infinity for High-Resolution Image Synthesis

The paper "Infinity: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis" presents a novel approach to autoregressive text-to-image generation, focused on improving scalability, reducing inference latency, and producing highly detailed images. It introduces Infinity, a Bitwise Visual AutoRegressive Modeling framework that combines a bitwise token prediction strategy with an Infinite-Vocabulary Classifier (IVC) and a Bitwise Self-Correction mechanism, setting new benchmarks for high-resolution image synthesis.

Key Methodological Advancements

Infinity proposes several fundamental innovations:

  1. Bitwise Tokenization: The paper thoroughly discusses the limitations of index-wise tokenization and introduces bitwise modeling to address these inherent issues. By encoding image features into bitwise tokens rather than large integer indices, Infinity expands the vocabulary size exponentially, improving the reconstruction quality while maintaining a lower computational burden.
  2. Infinite-Vocabulary Classifier (IVC): Traditional models struggle with memory and optimization challenges when scaling vocabulary size. Infinity tackles this by employing a bitwise IVC, which predicts bitwise outcomes rather than index-based labels. This methodology allows a dramatic reduction in parameters (up to 99.95%) without sacrificing prediction accuracy, as evidenced by improved image fidelity and benchmark scores.
  3. Bitwise Self-Correction: Recognizing the train-test mismatch introduced by teacher forcing, the work incorporates a self-correction mechanism that injects random errors during training so the model learns to recover from its own mistakes at inference. This is shown to reduce train-test discrepancies and considerably improve model performance (e.g., reducing FID from 9.76 to 3.48).
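The bitwise tokenization idea above can be illustrated with a minimal sketch: each continuous feature dimension is mapped to a single bit (here, simply by its sign), so a d-dimensional feature implicitly indexes a vocabulary of size 2^d without that vocabulary ever being materialized. This is an illustrative simplification, not the paper's exact tokenizer, which uses multi-scale residual quantization; the function name `bitwise_quantize` is hypothetical.

```python
import numpy as np

def bitwise_quantize(z):
    # Map each continuous feature dimension to one bit via its sign.
    # A d-dimensional feature thus addresses an implicit vocabulary of 2^d
    # entries, but the model only ever handles the d bits.
    bits = (z > 0).astype(np.int8)          # shape (..., d), values in {0, 1}
    codes = np.where(bits == 1, 1.0, -1.0)  # quantized embedding in {-1, +1}^d
    return bits, codes

z = np.array([0.3, -1.2, 0.7, -0.1])
bits, codes = bitwise_quantize(z)
# bits is [1, 0, 1, 0]; the index-wise label would be the integer
# 0b1010 = 10, but bitwise modeling never enumerates the 2^d-way vocabulary.
```

Predicting the d bits independently is what lets the vocabulary scale exponentially while the model's outputs stay linear in d.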
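The parameter savings of the IVC can be seen with simple arithmetic: an index-wise classifier needs one logit per vocabulary entry (hidden_dim × 2^d weights), whereas d independent 2-way bit classifiers need only hidden_dim × d × 2. The numbers below (hidden_dim = 2048, d = 16) are illustrative assumptions chosen to reproduce the ~99.95% reduction the paper reports, not the model's actual configuration.

```python
def classifier_params(hidden_dim, num_bits):
    # Index-wise head: one output logit per entry of the 2^d vocabulary.
    index_wise = hidden_dim * (2 ** num_bits)
    # Bitwise IVC: num_bits independent binary (2-way) classifiers.
    bitwise = hidden_dim * num_bits * 2
    return index_wise, bitwise

idx, bitw = classifier_params(hidden_dim=2048, num_bits=16)
reduction = 1 - bitw / idx  # fraction of classifier parameters eliminated
# With these assumed sizes the reduction is 99.95%.
```

For larger bit widths the gap widens further, since the index-wise head grows exponentially in d while the bitwise head grows only linearly.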
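The self-correction mechanism can be sketched as a training-time perturbation: a fraction of the teacher-forced bits is randomly flipped so the model sees inputs resembling its own inference-time errors. This shows only the bit-flipping step under assumed names (`self_correct_training_input`, `flip_prob`); the paper additionally re-quantizes the residuals after perturbation.

```python
import numpy as np

def self_correct_training_input(bits, flip_prob, rng):
    # Flip each teacher-forced bit independently with probability flip_prob,
    # so training inputs mimic the model's own likely mistakes and the
    # model learns to correct them at inference time.
    mask = rng.random(bits.shape) < flip_prob
    return np.where(mask, 1 - bits, bits)

rng = np.random.default_rng(0)
clean = rng.integers(0, 2, size=(4, 16))  # a toy batch of bitwise tokens
noisy = self_correct_training_input(clean, flip_prob=0.1, rng=rng)
```

Training against targets derived from the clean sequence while conditioning on the noisy one is what closes the train-test gap that pure teacher forcing leaves open.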

Empirical Validation and Results Achieved

The empirical evaluation demonstrates superior performance across benchmarks: Infinity attains an ImageReward score of 0.962 and a 93.11% human-preference rate for image relevance and quality. It also delivers fast inference, generating a 1024x1024 image in 0.8 seconds, a 2.6× speedup over comparable models, while maintaining prompt adherence and visual quality. Moreover, the bitwise token prediction framework scales efficiently as both the tokenizer vocabulary and the transformer grow, improving generative capacity and image detail and underscoring the model's headroom for further scaling.

Implications and Future Prospects

The methodologies introduced in Infinity have significant implications, particularly for advancing the state of autoregressive models in visual generation tasks. The successful integration of bitwise tokens and scalable architecture demonstrates a compelling route for achieving near-continuous representation performance in discrete autoregressive frameworks, which could inspire further research into improving both computational efficiency and image generation quality.

Future research may explore the potential of Infinity's methods in broader AI fields or adapt bitwise tokenization to other generative contexts beyond images, such as text or audio. Additionally, examining the application of Infinity's framework in real-time generative adversarial networks (GANs) or expanding its capabilities to video generation presents promising avenues for cross-disciplinary innovation.

In summary, the Infinity model combines substantial methodological advances with strong empirical results, marking a noteworthy contribution to the evolution of autoregressive modeling for image synthesis. Its scalable approach and rapid inference may pave the way for future work on the detail, speed, and efficiency of AI-generated imagery.
