PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis (2310.00426v3)

Published 30 Sep 2023 in cs.CV

Abstract: The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering the fundamental innovation for the AIGC community while increasing CO2 emissions. This paper introduces PIXART-$\alpha$, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024px resolution with low training cost, as shown in Figure 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-LLM to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PIXART-$\alpha$'s training speed markedly surpasses existing large-scale T2I models, e.g., PIXART-$\alpha$ only takes 10.8% of Stable Diffusion v1.5's training time (675 vs. 6,250 A100 GPU days), saving nearly \$300,000 (\$26,000 vs. \$320,000) and reducing 90% CO2 emissions. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PIXART-$\alpha$ excels in image quality, artistry, and semantic control. We hope PIXART-$\alpha$ will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch.

Overview of PixArt-$\alpha$: Efficient Diffusion Transformers for Photorealistic Text-to-Image Synthesis

The paper introduces PixArt-$\alpha$, a Transformer-based diffusion model designed for photorealistic text-to-image (T2I) synthesis. The innovation primarily lies in achieving image generation quality that matches or surpasses current state-of-the-art methods, such as Stable Diffusion or Imagen, while significantly reducing the computational demands and associated emissions typically required to train large-scale deep learning models.

Significant emphasis is placed on the training cost and environmental footprint of existing generative models, and the authors propose a methodological shift in the training paradigm. The PixArt-$\alpha$ model achieves competitive results with only about 10.8% of the training time of Stable Diffusion v1.5 (675 vs. 6,250 A100 GPU days) and roughly 1% of the training cost of the larger RAPHAEL model, positioning it as an economically feasible alternative for academic and entrepreneurial ventures.

Core Contributions

1. Training Strategy Decomposition: The T2I task is decomposed into three subproblems:
   - Pixel Dependency Learning: learns the intrinsic structure of natural images, initialized from a class-condition model.
   - Text-Image Alignment Learning: aligns text descriptions with image content using data with high concept density.
   - High Aesthetic Quality Synthesis: fine-tunes the model on aesthetically superior data to enhance visual quality.
2. Efficient T2I Transformer: The architecture adapts the Diffusion Transformer (DiT) by incorporating cross-attention layers that inject textual information, re-parameterizing blocks to leverage ImageNet-pretrained weights, and trimming parameter usage with adaLN-single, reducing computational cost while maintaining model performance (a minimal sketch follows this list).
3. High-Informative Data: To improve training efficiency, the authors use the LLaVA vision-language model to auto-label images with dense pseudo-captions, producing text-image pairs with rich semantic content and addressing data quality limitations in existing datasets.
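To make the second contribution concrete, the following is a minimal, hypothetical PyTorch sketch of such a block: DiT-style self-attention, an added cross-attention branch over text-encoder features, and the adaLN-single idea, where a single shared MLP maps the timestep embedding to six modulation vectors and each block keeps only a small learnable offset table instead of its own modulation MLP. This is not the authors' implementation; the class name, layer sizes, and text-feature dimension are illustrative assumptions.

```python
# Hypothetical PixArt-alpha-style block: self-attention + cross-attention on
# caption features + shared adaLN-single timestep modulation.
import torch
import torch.nn as nn

class PixArtBlockSketch(nn.Module):
    def __init__(self, dim: int = 1152, heads: int = 16, text_dim: int = 4096):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        # Cross-attention uses caption features (e.g., T5 states) as key/value.
        self.cross_attn = nn.MultiheadAttention(dim, heads, kdim=text_dim,
                                                vdim=text_dim, batch_first=True)
        self.norm3 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        # adaLN-single: a per-block learnable offset added to a globally shared
        # timestep modulation, replacing DiT's per-block modulation MLP.
        self.scale_shift_table = nn.Parameter(torch.zeros(6, dim))

    def forward(self, x, text, t_mod):
        # x: (B, N, dim) latent tokens; text: (B, L, text_dim) caption tokens;
        # t_mod: (B, 6, dim) modulation computed once by a shared timestep MLP.
        mods = (self.scale_shift_table[None] + t_mod).unbind(dim=1)
        shift_sa, scale_sa, gate_sa, shift_ff, scale_ff, gate_ff = \
            (m.unsqueeze(1) for m in mods)
        h = self.norm1(x) * (1 + scale_sa) + shift_sa
        x = x + gate_sa * self.self_attn(h, h, h, need_weights=False)[0]
        x = x + self.cross_attn(self.norm2(x), text, text, need_weights=False)[0]
        h = self.norm3(x) * (1 + scale_ff) + shift_ff
        return x + gate_ff * self.mlp(h)

# Toy shapes: batch of 2, 16 latent tokens, 8 caption tokens.
block = PixArtBlockSketch(dim=64, heads=4, text_dim=32)
y = block(torch.randn(2, 16, 64), torch.randn(2, 8, 32), torch.randn(2, 6, 64))
print(y.shape)  # torch.Size([2, 16, 64])
```

The parameter saving comes from computing the six modulation vectors once with a single shared MLP and giving each block only a 6 x dim offset table, rather than running a separate modulation MLP per block as in the original DiT.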
Experimental Analysis

The model demonstrates strong performance across several benchmarks:
- Fidelity and Alignment: achieves a zero-shot FID of 7.32 on the COCO dataset, performing robustly against other top models (a sketch of the standard FID protocol follows below).
- Compositional Capabilities: excels on T2I-CompBench metrics, including attribute binding and object relationships, underscoring effective text-image alignment.

Despite a smaller dataset and a streamlined training process, user studies further corroborate its state-of-the-art synthesis quality, showing a clear preference over established models such as SDXL, especially in maintaining semantic alignment with prompts.
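For readers unfamiliar with the protocol behind numbers like the 7.32 above, the sketch below shows the standard FID recipe: compare Inception-v3 feature statistics of real images with those of generated images. It uses the torchmetrics implementation (installable via `pip install "torchmetrics[image]"`), not the authors' evaluation code, and random tensors stand in for the ~30K real COCO images and the caption-conditioned generations used in practice.

```python
# Sketch of zero-shot FID evaluation with torchmetrics (not the authors'
# pipeline). Lower FID means generated-image statistics are closer to
# real-image statistics.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)  # pooled Inception-v3 features

# Random uint8 stand-ins; torchmetrics expects (B, 3, H, W) uint8 by default.
real_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)

fid.update(real_images, real=True)   # accumulate real-image statistics
fid.update(fake_images, real=False)  # accumulate generated-image statistics
print(f"FID: {fid.compute().item():.2f}")
```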

Technical Implications and Future Work

PixArt-$\alpha$ marks a significant step toward balancing resource-heavy model training against image generation quality, highlighting the potential of architectural and training innovations to improve efficiency. The demonstrated reduction in both financial and environmental costs invites further exploration of similar advances in generative modeling and points to a broader industry shift toward sustainable AI development.

Future research might focus on capabilities the paper acknowledges as limitations, such as handling detailed object interactions and rendering legible text within images. Further opportunity lies in integrating PixArt-$\alpha$ with customized generation frameworks such as DreamBooth and ControlNet, which could broaden its applicability across diverse visual domains.

In conclusion, PixArt-$\alpha$ not only introduces a generative model that is competitive in both performance and efficiency, but also paves the way for responsible AI research and development aligned with environmental sustainability goals. The work illustrates how strategic design choices in model architecture and training methodology can deliver impactful advances in AI at a fraction of the usual resource expenditure.
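As a practical starting point, the released checkpoints are integrated into Hugging Face diffusers. The snippet below is a minimal inference sketch assuming a recent diffusers version with the `PixArtAlphaPipeline` class, a CUDA device, and the authors' published `PixArt-alpha/PixArt-XL-2-1024-MS` checkpoint; consult the current diffusers documentation in case names have changed.

```python
# Minimal text-to-image sampling with the released PixArt-alpha weights via
# Hugging Face diffusers (assumes a CUDA GPU with enough memory for fp16).
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
).to("cuda")

image = pipe("A small cactus with a happy face in the Sahara desert").images[0]
image.save("cactus.png")
```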

Authors (11)
  1. Junsong Chen (13 papers)
  2. Jincheng Yu (31 papers)
  3. Chongjian Ge (23 papers)
  4. Lewei Yao (15 papers)
  5. Enze Xie (84 papers)
  6. Yue Wu (339 papers)
  7. Zhongdao Wang (36 papers)
  8. James Kwok (23 papers)
  9. Ping Luo (340 papers)
  10. Huchuan Lu (199 papers)
  11. Zhenguo Li (195 papers)
Citations (233)