
GENIE: Higher-Order Denoising Diffusion Solvers (2210.05475v1)

Published 11 Oct 2022 in stat.ML and cs.LG

Abstract: Denoising diffusion models (DDMs) have emerged as a powerful class of generative models. A forward diffusion process slowly perturbs the data, while a deep model learns to gradually denoise. Synthesis amounts to solving a differential equation (DE) defined by the learnt model. Solving the DE requires slow iterative solvers for high-quality generation. In this work, we propose Higher-Order Denoising Diffusion Solvers (GENIE): Based on truncated Taylor methods, we derive a novel higher-order solver that significantly accelerates synthesis. Our solver relies on higher-order gradients of the perturbed data distribution, that is, higher-order score functions. In practice, only Jacobian-vector products (JVPs) are required and we propose to extract them from the first-order score network via automatic differentiation. We then distill the JVPs into a separate neural network that allows us to efficiently compute the necessary higher-order terms for our novel sampler during synthesis. We only need to train a small additional head on top of the first-order score network. We validate GENIE on multiple image generation benchmarks and demonstrate that GENIE outperforms all previous solvers. Unlike recent methods that fundamentally alter the generation process in DDMs, our GENIE solves the true generative DE and still enables applications such as encoding and guided sampling. Project page and code: https://nv-tlabs.github.io/GENIE.

Citations (93)

Summary

  • The paper introduces GENIE, a higher-order ODE solver that leverages truncated Taylor methods to reduce synthesis steps in diffusion models.
  • It extracts Jacobian-vector products via automatic differentiation and distills them into a small additional network head to efficiently compute the required higher-order gradient terms.
  • Experimental results on CIFAR-10, LSUN, and ImageNet validate GENIE's ability to maintain high quality while significantly speeding up synthesis.

Overview of GENIE: Higher-Order Denoising Diffusion Solvers

The paper "GENIE: Higher-Order Denoising Diffusion Solvers" presents an innovative advancement in the field of denoising diffusion models (DDMs), which have become prominent for their ability to produce high-quality synthetic data across image, video, and other domains. DDMs operate using a forward process that introduces noise into data, followed by a reverse process guided by a learned score function to denoise the data. Typically, the reverse process follows an ordinary differential equation (ODE) or a stochastic differential equation (SDE), requiring iterative solvers for synthesis.

The focus of this research is to address the inefficiency in the reverse synthesis process of DDMs, which requires numerous evaluations to achieve quality results. Traditional methods like DDIM utilize Euler's method for solving the reverse-time ODE. This paper introduces GENIE, a higher-order solver derived from truncated Taylor methods, to significantly accelerate the synthesis process while maintaining or improving sample quality.
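
To make the distinction concrete in a generic setting (this is an illustration of truncated Taylor methods, not the paper's exact derivation): a first-order Euler-type step uses only the drift of the ODE, while a second-order truncated Taylor step also uses its total time derivative, which introduces a Jacobian-vector product.

```latex
% Generic ODE dx/dt = h(x, t) with step size \Delta t.
% First-order (Euler / DDIM-style) step:
\mathbf{x}_{t+\Delta t} \approx \mathbf{x}_t + \Delta t\, h(\mathbf{x}_t, t)
% Second-order truncated Taylor step; the term (\nabla_x h)\, h is a Jacobian-vector product:
\mathbf{x}_{t+\Delta t} \approx \mathbf{x}_t + \Delta t\, h(\mathbf{x}_t, t)
  + \frac{(\Delta t)^2}{2}\Big(\partial_t h(\mathbf{x}_t, t)
  + \nabla_{\mathbf{x}} h(\mathbf{x}_t, t)\, h(\mathbf{x}_t, t)\Big)
```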

Key Technical Contributions

GENIE implements a second-order ODE solver based on truncated Taylor methods, leveraging higher-order score functions, i.e., higher-order gradients of the perturbed data distribution. In practice, the solver only requires Jacobian-vector products (JVPs), which can be extracted from the existing first-order score network via automatic differentiation. GENIE then distills these JVP computations into a small additional head on top of the score network, so the necessary higher-order terms can be computed efficiently during synthesis without the overhead of automatic differentiation at inference time.
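
A minimal sketch of how such JVPs can be extracted from a first-order score network with forward-mode automatic differentiation is shown below. It assumes a PyTorch model `score_model(x, t)`; the function and variable names are illustrative, not the paper's code, and the paper additionally distills this computation into a learned head so that no autodiff is needed at sampling time.

```python
import torch
from torch.func import jvp


def score_and_jvp(score_model, x, t, v):
    """Return the first-order score and the Jacobian-vector product
    (d score / d x) @ v, computed via forward-mode autodiff.

    All arguments are illustrative placeholders: `score_model` is any
    network mapping (x, t) to a score estimate, and `v` is the tangent
    direction along which the higher-order term is needed.
    """
    # Treat the score network as a function of x only, with t held fixed.
    score_fn = lambda x_in: score_model(x_in, t)
    score, score_jvp = jvp(score_fn, (x,), (v,))
    return score, score_jvp
```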

The strategy behind GENIE is to capture the local curvature of the ODE's gradient field, enabling larger and more accurate steps than the linear extrapolation used in methods such as DDIM. In contrast to recent approaches that fundamentally alter the generation process or rely on approximations that introduce errors during synthesis, GENIE still solves the true generative DE.
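
As a rough sketch of how such a curvature correction enters a few-step sampling loop, the snippet below adds a quadratic term on top of a first-order update. Both `eps_model` and `curvature_head` are assumed stand-ins for the trained score network and the distilled higher-order head, and the update coefficients are schematic rather than the paper's exact DDIM/GENIE parameterization.

```python
import torch


@torch.no_grad()
def few_step_sample(eps_model, curvature_head, x, timesteps):
    """Schematic sampler: a first-order (Euler/DDIM-like) update plus a
    second-order Taylor correction supplied by a distilled curvature head.
    Coefficients are placeholders, not the paper's exact scheme."""
    for t, t_next in zip(timesteps[:-1], timesteps[1:]):
        dt = t_next - t
        eps = eps_model(x, t)              # first-order prediction (drift term)
        d_eps = curvature_head(x, t, eps)  # distilled estimate of the drift's time derivative
        # Linear extrapolation plus a quadratic-in-dt curvature correction.
        x = x + dt * eps + 0.5 * dt ** 2 * d_eps
    return x
```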

Experimental Results

The paper validates GENIE across several image generation benchmarks, demonstrating superior sample quality when solving the generative ODE of DDMs with few synthesis steps. A notable advantage is that GENIE remains compatible with applications such as encoding and classifier-guided sampling, unlike some recent methods that deviate from the generative ODE framework. Results on datasets such as CIFAR-10, LSUN, and ImageNet confirm GENIE's efficacy, particularly in the few-step synthesis regime where standard first-order solvers lag behind.

Implications and Future Directions

GENIE has broad implications for practical DDM applications, including real-time and low-latency scenarios requiring quick synthesis without sacrificing quality. By leveraging higher-order gradients, GENIE addresses a critical bottleneck in the deployment of DDMs in resource-constrained environments. The specific architecture improvements and training methodologies proposed could catalyze further research into higher-order diffusion models and their solvers.

The introduction of GENIE provides a foundation for exploring even deeper integration of higher-order techniques in generative models. Future work could extend this approach to more complex data types, exploit even higher-order derivatives for further speed improvements, and integrate with novel synthesis strategies that optimize beyond current capabilities.

Overall, GENIE marks a significant advancement in the generative modeling landscape by providing a robust solution to the prominent challenge of efficient synthesis in diffusion models, aligning quality, speed, and operational compatibility.
