
Deep Inversion (DeepInv) Method

Updated 11 January 2026
  • The paper introduces the first fully trainable, stepwise neural solver for diffusion inversion using self-supervised noise pseudo-label generation.
  • DeepInv employs multi-scale iterative training with noise-space data augmentation and residual fusion, yielding significant SSIM and PSNR improvements.
  • The framework integrates seamlessly into existing image editing pipelines to achieve controllable image editing with enhanced speed and fidelity.

Deep Inversion (DeepInv) refers to a self-supervised methodology for fast and accurate diffusion inversion, a task central to controllable image editing in diffusion-based generative models. Diffusion inversion entails reconstructing the noise trajectory that a pretrained diffusion model (e.g., DDPM, DDIM, or rectified-flow variants) would have applied to generate a particular real image. Mastery of this mapping enables targeted image editing—such as altering prompts while preserving unmodified regions—by allowing precise re-denoising under new conditions. The DeepInv approach introduces the first trainable, stepwise neural solver for this purpose, combining self-supervised noise pseudo-labeling, noise-space data augmentation, and multi-scale iterative training (Zhang et al., 4 Jan 2026).

1. Background: Diffusion Inversion and Prior Approaches

Diffusion models generate images by learning to reverse a gradual noising process: training corrupts images with noise, and sampling iteratively denoises from a noise latent. Inversion runs sampling in the opposite direction, mapping a given image back to the corresponding latent noise trajectory. Accurate diffusion inversion allows real images to be edited with high fidelity and region preservation.

Prior methods for diffusion inversion relied largely on either:

  • Iterative optimization (e.g., ReNoise [garibi2024renoise], Pan et al. [pan2023effective]): These approaches optimize the noise latent per timestep using gradient steps, but are computationally prohibitive, requiring several thousand seconds per image on benchmarks such as COCO.
  • ODE/flow-based one-pass approximations (e.g., RF-Inv [rout2024semantic], RF-Solver [wang2024taming]): While efficient, these incur losses in reconstruction fidelity, with typical SSIM in the range 0.50–0.65.

In both cases, the absence of ground-truth noise latents forces existing solutions to rely on heuristic or approximate pseudo-labels, yielding a speed-quality tradeoff.

2. DeepInv Framework and Pipeline

DeepInv addresses supervision gaps through a self-supervised training paradigm, where no ground-truth noise labels are required. The central elements are:

  • Self-supervised pseudo-label generation: For each image, pseudo-noises are generated using a denoise-re-noise fusion, leveraging a pretrained diffusion model as a teacher.
  • Parameterized neural inversion solver $g_\phi$: Predicts the noise latent $\hat\varepsilon_t$ for each timestep $t$, conditioned on the encoded image latent $z_t^*$ and the timestep.
  • Iterative multi-scale training: The inversion solver is progressively trained across a sequence of timestep sets ($\mathcal{T} = \{1, 5, 10, 25, 50\}$), with staged increases in model depth and a hybrid loss mechanism.

Training and Inference Outline

In each training iteration, a real image is encoded via a VAE, and for steps $t$ in the current stage:

  1. The solver predicts the inversion noise $\varepsilon_t^* = g_\phi(z_t^*, t)$.
  2. A teacher model denoises the augmented latent, yielding the teacher noise $\bar\varepsilon_t = d^*(z_{t+1}^*, t+1)$.
  3. Fusing $\bar\varepsilon_t$ with $\varepsilon_t^*$ produces the augmented pseudo-noise label $\bar\varepsilon_t^*$.

Model parameters are updated to minimize the hybrid loss over these pseudo-labels. At inference, the trained solver predicts the noise trajectory for the image in a single pass.
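
The per-iteration pipeline can be summarized in code. The following is a minimal sketch under assumed interfaces: g_phi, s, d_star, and fuse are hypothetical stand-ins for the solver, the noise-addition step, the frozen teacher denoiser, and the fusion rule, and the gradient-stopping points are illustrative choices rather than details taken from the paper.

```python
import torch

def train_step(g_phi, s, d_star, fuse, optimizer, z0, timesteps):
    """One DeepInv-style training iteration (sketch, not official code).

    g_phi(z, t)       -- solver's predicted inversion noise at step t
    s(z, eps, t)      -- noise-addition step (z_t -> z_{t+1})
    d_star(z, t)      -- frozen teacher's noise prediction (denoising)
    fuse(e_t, e_p, t) -- augmented pseudo-label construction (Sections 3-4)
    z0                -- VAE-encoded image latent
    """
    loss = z0.new_zeros(())
    z_t = z0
    for t in timesteps:                          # current stage's timestep set
        eps_pred = g_phi(z_t, t)                 # step 1: solver prediction
        z_next = s(z_t, -eps_pred, t)            # recurrence z_{t+1}* = s(z_t*, -eps)
        with torch.no_grad():
            eps_teacher = d_star(z_next, t + 1)  # step 2: teacher denoising noise
        pseudo = fuse(eps_teacher, eps_pred.detach(), t)  # step 3: fusion
        loss = loss + torch.mean((eps_pred - pseudo) ** 2)
        z_t = z_next.detach()                    # stop gradients across steps
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```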

3. Self-supervised Objective and Pseudo-label Generation

DeepInv's self-supervision relies on an explicit fixed-point consistency: for optimal inversion, the following requirement holds:

$$z_t^* = d\big(s(z_t^*, -\varepsilon_t)\big)$$

with $d$ and $s$ denoting the diffusion denoising and noise-addition steps, respectively.
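
To make the fixed-point condition concrete, here is a toy illustration (a construction of this summary, not from the paper) with a variance-preserving noise-addition step and an idealized denoiser; to keep the toy transparent, the denoiser's internal noise prediction is passed in explicitly as eps_hat.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.9                                  # toy alpha-bar for a single step
z = rng.standard_normal(4)               # stand-in latent z_t*
eps = rng.standard_normal(4)             # candidate inversion noise

def s(z, eps):
    """Toy noise-addition step (variance-preserving form)."""
    return np.sqrt(a) * z + np.sqrt(1.0 - a) * eps

def d(z_noisy, eps_hat):
    """Toy denoising step; eps_hat plays the role of the model's noise
    prediction at the noised latent."""
    return (z_noisy - np.sqrt(1.0 - a) * eps_hat) / np.sqrt(a)

# If the denoiser would predict exactly -eps at s(z, -eps), the fixed
# point z = d(s(z, -eps)) holds:
print(np.allclose(d(s(z, -eps), -eps), z))            # True

# A wrong candidate breaks the fixed point; this residual is the
# self-supervision signal DeepInv exploits:
print(np.linalg.norm(d(s(z, -eps), -eps + 0.1) - z))  # > 0
```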

Loss compositions include:

  • Self-supervision loss: $\mathcal{L}_{\mathrm{self}} = \|\varepsilon_t^* - \bar\varepsilon_t\|_2^2$
  • Hybrid loss: $\mathcal{L}_{\mathrm{hyb}} = \|\varepsilon_t^* - \bar\varepsilon_t^*\|_2^2$, where the fusion is conditioned on the step index.
  • Stabilized multi-scale loss: $\mathcal{L}_{\mathrm{stable}} = \alpha\,\mathcal{L}_{\mathrm{self}} + (1-\alpha)\,\mathcal{L}_{\mathrm{hyb}}$

Pseudo-noise labels are generated without supervision: the teacher diffusion model denoises one noise-augmented latent step to yield $\bar\varepsilon_t$; then, for late timesteps, this is blended linearly with the solver's current prediction to stabilize training.
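
The three loss terms translate directly into code. A minimal sketch, assuming mean-squared errors over the noise tensors; the weight alpha and its schedule are illustrative assumptions, not values from the paper.

```python
import torch

def deepinv_losses(eps_pred, eps_teacher, eps_fused, alpha=0.5):
    """Loss terms from Section 3 (sketch).

    eps_pred    -- solver prediction   (eps_t^*)
    eps_teacher -- teacher noise       (eps-bar_t)
    eps_fused   -- fused pseudo-label  (eps-bar_t^*)
    alpha       -- mixing weight (illustrative value)
    """
    l_self = torch.mean((eps_pred - eps_teacher) ** 2)  # L_self
    l_hyb = torch.mean((eps_pred - eps_fused) ** 2)     # L_hyb
    return alpha * l_self + (1.0 - alpha) * l_hyb       # L_stable
```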

4. Data Augmentation in Noise Space

DeepInv does not use conventional image augmentations (e.g., cropping, flipping). Instead, augmentation is performed in noise space via:

  • Linear interpolation between denoising noise and solver predictions: for each training timestep, $\bar\varepsilon_t^*$ is a weighted sum of the teacher noise $\bar\varepsilon_t$ and the solver prediction $\varepsilon_t^*$ (see the sketch after this list).
  • This procedure exposes the solver to a range of noise distributions, enhancing its robustness and mitigating overfitting to narrow pseudo-label sets.
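
A minimal sketch of the fusion rule, assuming a fixed blend weight and a hard threshold for "late" timesteps; both lam and t_late are invented for illustration, since the text specifies only that late-timestep labels are linear blends.

```python
def fuse_pseudo_label(eps_teacher, eps_pred, t, t_late=25, lam=0.7):
    """Noise-space augmentation: blend teacher noise with the solver's
    detached prediction at late timesteps (threshold and weight assumed)."""
    if t < t_late:                 # early steps: use the teacher noise alone
        return eps_teacher
    return lam * eps_teacher + (1.0 - lam) * eps_pred
```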

5. Iterative Multi-scale Training and Model Scaling

Training proceeds in temporal stages, with timestep sets broadening from 1 to 50, corresponding to diffusion chain positions. This staged procedure supports:

  • Early stage learning: Coarse, global inversion at low timesteps.
  • Later stages: Progressive refinement with greater model depth; right-branch layers are expanded from 5 (for $k > 0.6$) to 9 (for $k \le 0.5$), appended via residual connections.
  • At each new stage, newly added layers are trained with previous layers frozen, followed by joint fine-tuning at a reduced learning rate (see the sketch after this list).
  • Recurrence equations ensure the solver satisfies both denoising and fixed-point conditions at inference:
    • $z_{t+1}^* = s\big(z_t^*, -g_\phi(z_t^*, t)\big)$
    • $z_t^* \approx d\big(z_{t+1}^*, t+1\big)$ during training.
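
The grow-freeze-finetune mechanics can be illustrated with a toy module. In this sketch, plain linear layers stand in for the MM-DiT blocks, and the 5-to-9 layer expansion of the right branch follows the description above; everything else is assumption.

```python
import torch.nn as nn

class RightBranch(nn.Module):
    """Toy stand-in for the image-conditioned right branch."""
    def __init__(self, dim=64, n_layers=5):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))

    def grow(self, n_new):
        """Freeze existing layers, then append new residual layers."""
        dim = self.layers[0].in_features
        for layer in self.layers:
            for p in layer.parameters():
                p.requires_grad_(False)
        self.layers.extend(nn.Linear(dim, dim) for _ in range(n_new))

    def forward(self, x):
        for layer in self.layers:
            x = x + layer(x)       # residual append
        return x

branch = RightBranch(n_layers=5)   # early stages: 5 layers
# ... train on the early timestep sets, then deepen for later stages ...
branch.grow(4)                     # 5 -> 9 layers; old layers frozen
# after training the new layers, unfreeze everything for joint fine-tuning
for p in branch.parameters():
    p.requires_grad_(True)         # joint fine-tune at a reduced learning rate
```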

6. Inversion Solver Architecture and Design

The core inversion network features a dual-branch structure:

  • Left branch (pretrained prior): Receives an empty prompt embedding ($\omega$) and timestep embeddings, passing through text-conditional MM-DiT blocks (adopting the SD3 architecture).
  • Right branch (image-conditioned refinement): Receives the image latent $z_t^*$ and a timestep embedding $\mathrm{TEMB}(z_0^*, t)$.
  • Shared pathway: Both branches take in the DDIM inversion prior ($\tilde\varepsilon_t$). Their outputs are merged via MM-DiT aggregation followed by a linear layer to give $\hat\varepsilon_t$, finalized with a residual connection:
    • $\hat\varepsilon_t \leftarrow \hat\varepsilon_t + \tilde\varepsilon_t$
  • This architecture separates the structural prior from image-conditioned cues for specialized representation, with the residual connection ensuring that inversion performance does not fall below the DDIM baseline (a structural sketch follows this list).
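
The following is a topology-only sketch of the dual-branch solver. MLPs stand in for the text-conditional MM-DiT stacks, and the prompt and timestep conditioning are collapsed into simple additions; only the shared DDIM prior, the merge-then-linear aggregation, and the final residual reflect the description above.

```python
import torch
import torch.nn as nn

class DualBranchSolver(nn.Module):
    """Structural sketch; real branches are SD3-style MM-DiT stacks."""
    def __init__(self, dim=64):
        super().__init__()
        self.left = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                  nn.Linear(dim, dim))    # pretrained prior
        self.right = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                   nn.Linear(dim, dim))   # image-conditioned
        self.merge = nn.Linear(2 * dim, dim)              # aggregation head

    def forward(self, z_t, eps_ddim, temb):
        # Both branches receive the DDIM inversion prior eps_ddim.
        h_left = self.left(eps_ddim)
        h_right = self.right(z_t + temb + eps_ddim)
        eps_hat = self.merge(torch.cat([h_left, h_right], dim=-1))
        return eps_hat + eps_ddim  # residual: falls back to the DDIM prior
```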

7. Empirical Evaluation and Ablation

Inversion Results on COCO

Method                        SSIM ↑    Time per image ↓
EasyInv [zhang2025easy]       0.643     34 s
ReNoise [garibi2024renoise]   0.451     4,746 s
DeepInv (Ours)                0.903     48 s

DeepInv improves SSIM by 40.4% relative to EasyInv and is roughly 98-fold faster than ReNoise. It achieves a PSNR of 29.63 dB, surpassing EasyInv (18.58 dB).
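
The headline figures follow directly from the table:

```python
# Sanity check of the reported gains against the table above.
ssim_gain = (0.903 - 0.643) / 0.643   # ~0.404 -> the reported 40.4%
speedup = 4746 / 48                   # ~98.9  -> the reported ~98-fold
print(f"SSIM gain: {ssim_gain:.1%}, speedup: {speedup:.1f}x")
```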

Downstream Image Editing (PIE-Bench)

DeepInv is readily integrated into existing editing pipelines. Applied to methods such as FTEdit and RF-Inv, DeepInv consistently improves SSIM and related metrics. For example, plugging DeepInv into RF-Inv raises SSIM from 0.71 to 0.86, and even inversion-free methods like DVRF show metric gains when supplied with high-quality noise from DeepInv.
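
As an illustration of where the solver slots into such pipelines, the following sketch is entirely hypothetical: every name (solver.invert, sampler.denoise, vae.encode, vae.decode) is invented for exposition, and the actual integration points in FTEdit or RF-Inv may differ.

```python
def edit_with_deepinv(solver, sampler, vae, image, new_prompt):
    """Hypothetical invert-then-edit flow (all names are placeholders)."""
    z0 = vae.encode(image)                  # image -> latent
    trajectory = solver.invert(z0)          # single-pass noise trajectory
    z_T = trajectory[-1]                    # terminal noise latent
    z_edit = sampler.denoise(z_T, prompt=new_prompt)  # re-denoise, new prompt
    return vae.decode(z_edit)
```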

Ablations

  • Noise fusion: Applying DeepInv's fusion strategy to baselines improves them marginally, but a significant gap remains (DeepInv: SSIM 0.90 vs. EasyInv: 0.75–0.78).
  • Layer extension: Right branch expansion from 5 to 9 layers yields small PSNR improvements (+1 dB), while unnecessary depth increases in both branches degrade performance.

A salient observation is that the structural prior should remain minimally altered, with most capacity increases applied to the image-conditioned branch.

8. Insights and Future Directions

DeepInv establishes the first fully trainable, stepwise inversion solver for diffusion models. Its self-supervised pseudo-labeling and data augmentation principles suggest straightforward extension to other generative frameworks, including video diffusion. The approach is amenable to data-free or semi-supervised adaptation, such as the use of synthetic noise augmentations or domain-specific fine-tuning, for novel editing contexts. The integration of DeepInv into existing or future editing algorithms provides systematic improvements in fidelity, speed, and robustness (Zhang et al., 4 Jan 2026).
