Rectified Diffusion Guidance for Conditional Generation (2410.18737v1)
Abstract: Classifier-Free Guidance (CFG), which combines the conditional and unconditional score functions with two coefficients summing to one, serves as a practical technique for diffusion model sampling. Theoretically, however, denoising with CFG cannot be expressed as a reciprocal diffusion process, which may consequently introduce hidden risks in practice. In this work, we revisit the theory behind CFG and rigorously confirm that the improper configuration of the combination coefficients (i.e., the widely used summing-to-one version) brings about an expectation shift of the generative distribution. To rectify this issue, we propose ReCFG with a relaxation on the guidance coefficients such that denoising with ReCFG strictly aligns with the diffusion theory. We further show that our approach enjoys a closed-form solution given the guidance strength. That way, the rectified coefficients can be readily pre-computed via traversing the observed data, leaving the sampling speed barely affected. Empirical evidence on real-world data demonstrates the compatibility of our post-hoc design with existing state-of-the-art diffusion models, including both class-conditioned ones (e.g., EDM2 on ImageNet) and text-conditioned ones (e.g., SD3 on CC12M), without any retraining. We will open-source the code to facilitate further research.
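To make the distinction concrete, the sketch below contrasts the standard CFG combination (coefficients $1+w$ and $-w$, which sum to one) with a relaxed two-coefficient combination in the spirit of ReCFG. Function names and the toy inputs are illustrative; the abstract states that the rectified coefficients admit a closed-form solution and are pre-computed from observed data, but that formula is not given here, so the relaxed coefficients are left as free parameters.

```python
import numpy as np

def cfg_combine(eps_cond, eps_uncond, w):
    """Standard CFG: combine conditional and unconditional noise
    predictions with coefficients (1 + w) and -w, which sum to one."""
    return (1.0 + w) * eps_cond - w * eps_uncond

def relaxed_combine(eps_cond, eps_uncond, g_cond, g_uncond):
    """Relaxed guidance (illustrative of ReCFG): the two coefficients
    are decoupled, so they need not sum to one. In the paper they are
    pre-computed in closed form; here they are free parameters."""
    return g_cond * eps_cond + g_uncond * eps_uncond

# Toy example with 2-dimensional noise predictions.
eps_c = np.array([1.0, 0.5])  # conditional prediction
eps_u = np.array([0.2, 0.1])  # unconditional prediction
guided_cfg = cfg_combine(eps_c, eps_u, w=2.0)              # coefficients 3 and -2
guided_relaxed = relaxed_combine(eps_c, eps_u, 3.0, -1.9)  # coefficients sum to 1.1
```

The only structural change is that the two coefficients become independent degrees of freedom, which is why ReCFG can be applied post hoc to a pretrained model without retraining.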
- Classifier-free guidance is a predictor-corrector. arXiv preprint arXiv:2408.09000, 2024.
- Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In CVPR, 2021.
- PixArt-$\alpha$: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In ICLR, 2024.
- ILVR: Conditioning method for denoising diffusion probabilistic models. In ICCV, 2021.
- ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
- Diffusion models beat GANs on image synthesis. In NeurIPS, 2021.
- Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and MCMC. In ICML, 2023.
- Scaling rectified flow transformers for high-resolution image synthesis. arXiv preprint arXiv:2403.03206, 2024.
- CLIPScore: a reference-free evaluation metric for image captioning. In EMNLP, 2021.
- GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In NeurIPS, 2017.
- Classifier-free diffusion guidance. In NeurIPSW, 2021.
- Denoising diffusion probabilistic models. In NeurIPS, 2020.
- Composer: Creative and controllable image synthesis with composable conditions. In ICML, 2023.
- Guiding a diffusion model with a bad version of itself. arXiv preprint arXiv:2406.02507, 2024a.
- Analyzing and improving the training dynamics of diffusion models. In CVPR, 2024b.
- Variational diffusion models. In NeurIPS, 2021.
- Improved precision and recall metric for assessing generative models. arXiv preprint arXiv:1904.06991, 2019.
- Microsoft COCO: Common objects in context. arXiv preprint arXiv:1405.0312, 2015.
- SDXL: Improving latent diffusion models for high-resolution image synthesis. In ICLR, 2024.
- Learning transferable visual models from natural language supervision. In ICML, 2021.
- High-resolution image synthesis with latent diffusion models. In CVPR, 2022.
- Projected GANs converge faster. In NeurIPS, 2021.
- Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, 2015.
- Denoising diffusion implicit models. In ICLR, 2021.
- Score-based generative modeling through stochastic differential equations. In ICLR, 2021.