STARFlow: Scaling Latent Normalizing Flows for High-resolution Image Synthesis (2506.06276v1)

Published 6 Jun 2025 in cs.CV, cs.AI, and cs.LG

Abstract: We present STARFlow, a scalable generative model based on normalizing flows that achieves strong performance in high-resolution image synthesis. The core of STARFlow is Transformer Autoregressive Flow (TARFlow), which combines the expressive power of normalizing flows with the structured modeling capabilities of Autoregressive Transformers. We first establish the theoretical universality of TARFlow for modeling continuous distributions. Building on this foundation, we introduce several key architectural and algorithmic innovations to significantly enhance scalability: (1) a deep-shallow design, wherein a deep Transformer block captures most of the model representational capacity, complemented by a few shallow Transformer blocks that are computationally efficient yet substantially beneficial; (2) modeling in the latent space of pretrained autoencoders, which proves more effective than direct pixel-level modeling; and (3) a novel guidance algorithm that significantly boosts sample quality. Crucially, our model remains an end-to-end normalizing flow, enabling exact maximum likelihood training in continuous spaces without discretization. STARFlow achieves competitive performance in both class-conditional and text-conditional image generation tasks, approaching state-of-the-art diffusion models in sample quality. To our knowledge, this work is the first successful demonstration of normalizing flows operating effectively at this scale and resolution.

Authors (10)
  1. Jiatao Gu (84 papers)
  2. Tianrong Chen (21 papers)
  3. David Berthelot (18 papers)
  4. Huangjie Zheng (34 papers)
  5. Yuyang Wang (111 papers)
  6. Ruixiang Zhang (69 papers)
  7. Laurent Dinh (19 papers)
  8. Miguel Angel Bautista (24 papers)
  9. Josh Susskind (38 papers)
  10. Shuangfei Zhai (50 papers)

Summary

STARFlow: Scaling Latent Normalizing Flows for High-resolution Image Synthesis

The paper, authored by Jiatao Gu et al., introduces STARFlow, a scalable approach to high-resolution image synthesis based on normalizing flows. STARFlow builds on Transformer Autoregressive Flow (TARFlow), which combines normalizing flows with autoregressive Transformer architectures, a pairing that has shown promising results in generative modeling.

Core Innovations and Contributions

STARFlow is distinguished by architectural and algorithmic advancements that enhance the scalability and performance of normalizing flows:

  1. Deep-Shallow Architecture Design: Most of the model's capacity is allocated to a deep Transformer block, complemented by a few computationally inexpensive shallow blocks. This concentrates parameters on the stages closest to the prior distribution while keeping the overall stack efficient (see the architecture sketch after this list).
  2. Latent Space Learning: Rather than modeling pixels directly, STARFlow operates in the latent space of a pretrained autoencoder. Empirical evaluations show this markedly improves generative quality, especially for high-resolution inputs.
  3. Novel Guidance Algorithm: A new guidance method notably improves sample quality, particularly at high guidance weights, and supports both class-conditional and text-to-image generation (a generic guided-sampling sketch follows the architecture sketch below).
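
To make the deep-shallow design concrete, below is a minimal PyTorch sketch of a stack of autoregressive flow blocks in which one deep block carries most of the parameters and a few shallow blocks remain cheap. The inputs are assumed to be flattened autoencoder latent tokens; the block widths, depths, and the omission of inter-block sequence permutations are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ARFlowBlock(nn.Module):
    """One autoregressive flow block: a causal Transformer predicts a
    per-token shift mu and log-scale alpha from preceding tokens, so
    z_t = (x_t - mu(x_{<t})) * exp(-alpha(x_{<t})) is exactly invertible."""
    def __init__(self, dim, depth, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True, norm_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_mu_alpha = nn.Linear(dim, 2 * dim)

    def forward(self, x):                      # x: (B, T, dim) latent tokens
        T = x.size(1)
        # Shift right so position t conditions only on tokens < t.
        h = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(x.device)
        h = self.transformer(h, mask=causal)
        mu, alpha = self.to_mu_alpha(h).chunk(2, dim=-1)
        z = (x - mu) * torch.exp(-alpha)
        log_det = -alpha.sum(dim=(1, 2))       # triangular Jacobian: exact
        return z, log_det

class DeepShallowFlow(nn.Module):
    """Deep-shallow stack: a few cheap shallow blocks act first in the
    data-to-prior direction, and one deep block carrying most of the
    parameters sits nearest the Gaussian prior."""
    def __init__(self, dim, deep_depth=24, shallow_depth=2, n_shallow=3):
        super().__init__()
        shallow = [ARFlowBlock(dim, shallow_depth) for _ in range(n_shallow)]
        self.blocks = nn.ModuleList(shallow + [ARFlowBlock(dim, deep_depth)])

    def forward(self, x):
        log_det = x.new_zeros(x.size(0))
        for block in self.blocks:
            x, ld = block(x)
            log_det = log_det + ld
        # Maximize log N(z; 0, I) + log_det for exact maximum likelihood.
        return x, log_det
```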

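The paper's guidance algorithm itself is not reproduced in this summary; as context, the sketch below shows the generic classifier-free-guidance combination that such methods build on, applied during the token-by-token inversion of one flow block. The `block.predict` interface and the extrapolation of both the shift and the log-scale are illustrative assumptions.

```python
import torch

@torch.no_grad()
def sample_block_with_guidance(block, z, cond, w=2.0):
    """Invert one autoregressive flow block token by token, combining
    conditional and unconditional predictions in the style of
    classifier-free guidance. `block.predict(prefix, cond)` is a
    hypothetical interface returning the next token's (mu, alpha) given
    the generated prefix (it must accept an empty prefix at t=0);
    STARFlow's actual guidance rule modifies this generic scheme.
    """
    B, T, D = z.shape
    x = torch.zeros_like(z)
    for t in range(T):
        mu_c, a_c = block.predict(x[:, :t], cond)   # conditional stats
        mu_u, a_u = block.predict(x[:, :t], None)   # unconditional stats
        mu = mu_u + w * (mu_c - mu_u)               # guided shift
        alpha = a_u + w * (a_c - a_u)               # guided log-scale
        x[:, t] = mu + torch.exp(alpha) * z[:, t]   # invert: x_t from z_t
    return x
```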
Theoretical Contributions

The paper lays theoretical foundations for the expressivity of autoregressive flows by establishing their universality for modeling continuous distributions when multiple flow blocks are stacked. The universality proposition (sketched in Section 1.4 of the paper) explains how stacked autoregressive flows can serve as a general-purpose modeling approach.
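For reference, a single autoregressive flow block applies the invertible, per-token map below, and exact maximum likelihood follows from the change of variables; this is the standard autoregressive-flow formulation that TARFlow-style blocks instantiate (written for scalar tokens; vector tokens also sum the log-scales over dimensions):

```latex
z_t = \frac{x_t - \mu_\theta(x_{<t})}{\sigma_\theta(x_{<t})}, \qquad
\log p_\theta(x) = \log \mathcal{N}(z;\, 0, I) - \sum_t \log \sigma_\theta(x_{<t})
```

Because each z_t depends on earlier tokens only through the conditioner, the Jacobian is triangular and its log-determinant reduces to a sum of log-scales, which is what keeps exact likelihood training tractable at scale.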

Empirical Results

The empirical part of the paper demonstrates STARFlow's competitive performance across several image synthesis benchmarks. STARFlow achieves favorable results in both class-conditional and text-conditional image generation, rivaling state-of-the-art diffusion models in sample quality while remaining an end-to-end normalizing flow with exact maximum-likelihood training.

  • ImageNet Evaluations: On the ImageNet 256×256 benchmark, STARFlow reports an FID of 2.40, demonstrating significant improvements over previous normalizing flow models such as TARFlow.
  • Text-to-Image Evaluations: For COCO 2017 zero-shot generation, STARFlow records an FID of 9.1, solidifying its capability in generating high-quality images conditioned on textual descriptions.

Implications and Future Directions

The advancements in STARFlow suggest substantial potential for normalizing flows in scalable, high-resolution generative tasks. Further work could address joint training of the latent autoencoder and the normalizing flow, which the authors highlight as a limitation, as well as optimizing inference speed and extending the approach beyond image generation to modalities such as video synthesis or 3D scene modeling. Overall, STARFlow offers a promising alternative to prevailing generative paradigms and opens pathways for diverse applications in AI-driven image synthesis.
