Degradation-Aware Metric Prompting (DAMP)

Updated 30 December 2025
  • DAMP is a framework that leverages continuous metrics to automatically generate prompt vectors, replacing traditional discrete cues in restoration tasks.
  • It integrates spatial-spectral adaptive modules with a Mixture-of-Experts approach to dynamically route neural operations based on real-time degradation assessments.
  • Empirical results demonstrate improved PSNR/SSIM performance over explicit prompting methods, enhancing restoration accuracy on both natural and remote-sensing images.

Degradation-Aware Metric Prompting (DAMP) describes a class of architectures and mechanisms that replace explicit degradation priors with continuous, interpretable metrics for guiding restoration models under diverse artifact conditions. DAMP operates by extracting multi-dimensional quantifications of degradation severity or perceptual quality directly from corrupted inputs, translating these metrics into prompt vectors that dynamically route and adapt neural operations. It has been instantiated for hyperspectral image (HSI) restoration (Wang et al., 23 Dec 2025) and generative diffusion sampling (Su et al., 17 Apr 2025), providing adaptive feature modulation and prompt conditioning that generalize to complex, mixed, or previously unseen degradations.

1. Rationale and Limitations of Prior Prompting in Restoration

Traditional unified restoration approaches for HSIs seek a model $R_\theta$ that inverts non-unique transformations and noise, operating over all modalities of degradation (denoising, deblurring, inpainting, super-resolution, band-missing). They commonly employ explicit prompts—discrete labels (e.g., "denoise," blur kernel size) or text annotations—to guide the restoration process. However, real-world degradations are continuous and entangled, making such prompts impractical. Explicit cues fail to encode severity gradations or nuanced cross-task similarities, and discrete branches omit latent shared correlations—such as the overlapping frequency loss from both blur and noise. DAMP is motivated by these constraints, moving away from manually defined or one-hot indicators towards latent, automatically inferred metric vectors that succinctly describe the degradation observed in the input (Wang et al., 23 Dec 2025).

2. Degradation Metric Extraction and Prompt Construction

DAMP formalizes a "Degradation Prompt" (DP) as a compact vector formed by multi-dimensional metrics quantifying spatial and spectral degradation:

  • High-Frequency Energy Ratio (HFER): Averages per-channel energy in high spatial frequencies, calculated as

$$\text{HFER} = \frac{1}{C} \sum_{c=1}^{C} \frac{ \sum_{(u,v)\in\Omega_H} |\mathcal{F}[x_c(u,v)]|^2 }{ \sum_{(u,v)} |\mathcal{F}[x_c(u,v)]|^2 }$$

Lower HFER signals lost spatial detail.

  • Spatial Texture Uniformity (STU): The geometric-arithmetic mean ratio over the Fourier spectrum, measuring noise-induced texture flattening.
  • Spectral Curvature Mean (SCM): Discrete Laplacian of spectral bands, identifying smoothing versus artifact.

Three additional metrics—spectral curvature standard deviation (SCSD), gradient magnitude standard deviation (GSD), local spatial correlation coefficient (SCC)—complete the DP:

$$e = [\text{HFER}, \text{STU}, \text{SCM}, \text{SCSD}, \text{GSD}, \text{SCC}] \in \mathbb{R}^6$$
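As a concrete sketch, the HFER term above can be computed with a plain FFT. The function name and the frequency-radius cutoff defining $\Omega_H$ are illustrative assumptions, not details from the paper:

```python
import numpy as np

def hfer(x, radius_frac=0.25):
    """High-Frequency Energy Ratio averaged over channels.
    x: (C, H, W) hyperspectral cube; frequencies with radius above
    radius_frac form Omega_H (the cutoff is an illustrative choice)."""
    fy = np.fft.fftfreq(x.shape[1])[:, None]
    fx = np.fft.fftfreq(x.shape[2])[None, :]
    omega_h = np.sqrt(fy**2 + fx**2) > radius_frac   # high-frequency mask
    ratios = []
    for band in x:
        spectrum = np.abs(np.fft.fft2(band)) ** 2    # per-band power spectrum
        ratios.append(spectrum[omega_h].sum() / spectrum.sum())
    return float(np.mean(ratios))
```

A smooth or blurred cube yields a lower HFER than a sharp or noisy one, matching the interpretation above; the remaining five metrics would be computed analogously and concatenated into $e \in \mathbb{R}^6$.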

For perceptual restoration in AdaQual-Diff (Su et al., 17 Apr 2025), a fine-grained no-reference quality map $Q(\mathbf{y}) \in [1,5]^{H \times W}$ is computed using DeQAScore, where

$$Q_{i,j} = \mathbb{E}_{s \sim p_{i,j}(s)}[s] = \int_1^5 s \, p_{i,j}(s) \, ds$$

with $p_{i,j}$ denoting the distribution over patchwise scores at each pixel. The regional mean quality $q_r$ then maps to a local prompt complexity:

$$C_p(q_r) = C_{\min} + (C_{\max} - C_{\min}) (1 - \widehat{q}_r)$$

where $\widehat{q}_r = \frac{q_r - q_{\min}}{q_{\max} - q_{\min}}$.
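The expected-score map and the quality-to-complexity mapping can be written directly from these formulas. This is a minimal sketch: the discrete five-level bins stand in for the integral, and the prompt-count bounds are assumptions, not the paper's values:

```python
import numpy as np

def expected_score(p_bins):
    """Per-pixel expected quality E[s] over five score levels
    (a discrete stand-in for the integral form of Q_{i,j}).
    p_bins: (5, H, W) distribution, summing to 1 along axis 0."""
    scores = np.arange(1, 6, dtype=float)        # s in {1, ..., 5}
    return np.tensordot(scores, p_bins, axes=1)  # (H, W) quality map

def prompt_complexity(q_r, q_min=1.0, q_max=5.0, c_min=2, c_max=16):
    """Map regional mean quality q_r to a prompt count C_p; the
    bounds c_min/c_max are illustrative, not reported values."""
    q_hat = (q_r - q_min) / (q_max - q_min)      # normalize to [0, 1]
    return round(c_min + (c_max - c_min) * (1.0 - q_hat))
```

Low-quality regions (small $q_r$) receive the largest prompt counts, realizing the inverse scaling between quality and prompt complexity.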

3. Adaptive Architectural Modules: SSAM and MoE Integration

Spatial-Spectral Adaptive Modules (SSAMs) serve as specialized "experts" within a Mixture-of-Experts (MoE) topology. Each SSAM $i$ processes input features $F \in \mathbb{R}^{H' \times W' \times D}$ as:

  • Spatial branch: $F_s = E_s(F)$ (e.g., a transformer block)
  • Spectral branch: $F_c = E_c(F)$ (e.g., a 1D convolution)
  • Fusion: $F_\text{expert}^{(i)} = \lambda_s^{(i)} F_s + \lambda_c^{(i)} F_c$, subject to $\lambda_s^{(i)} + \lambda_c^{(i)} = 1$ and $\lambda_s^{(i)}, \lambda_c^{(i)} \geq 0$.

Degradation-adaptive MoE layers (DAMoE) gate the SSAMs using the DP vector. Routing probabilities $g$ are computed via a joint projection of shallow features $x$ and the DP $e$, with softmax activation and top-$k$ selection:

$$g = T_k\left(\text{softmax}(f_\text{proj}(x, e) + \epsilon)\right), \quad \|g\|_0 = k$$

The final restoration feature is aggregated and fused as $y = \text{Fuse}([f_\text{shared}, f_\text{deg}])$.
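The gating step above can be sketched as follows. This is a simplified stand-in: `w_proj` replaces the learned joint projection $f_\text{proj}$, and the optional noise term plays the role of $\epsilon$:

```python
import numpy as np

def damoe_gate(x, e, w_proj, k=1, noise_scale=0.0, rng=None):
    """Sparse top-k gating over SSAM experts from shallow features x
    and degradation prompt e (a sketch, not the paper's exact layer)."""
    logits = w_proj @ np.concatenate([x, e])   # joint projection
    if noise_scale > 0.0:
        rng = rng or np.random.default_rng()
        logits = logits + noise_scale * rng.normal(size=logits.shape)
    p = np.exp(logits - logits.max())
    p /= p.sum()                               # softmax routing probabilities
    g = np.zeros_like(p)
    top = np.argsort(p)[-k:]                   # indices of the k largest probs
    g[top] = p[top] / p[top].sum()             # renormalize the kept mass
    return g                                   # ||g||_0 == k
```

With `k=1` this reduces to hard expert selection, matching the top-1 routing that the ablations in Section 5 report as optimal.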

In AdaQual-Diff, region-dependent guidance fields $\{(r, \mathcal{P}_r)\}$ are synthesized by partitioning the image by local quality, adaptively selecting prompt pools and prompt counts per region, and feeding these into cross-attention layers during diffusion updates.

4. Training Objectives and Loss Formulations

DAMP is trained end-to-end with an L1 pixel loss over all restoration tasks:

$$\mathcal{L}(\theta) = \mathbb{E}_{(\mathbf{Y}, \mathbf{X})} \bigl[ \|R_\theta(\mathbf{Y}) - \mathbf{X}\|_1 \bigr]$$

No explicit supervision is imposed on DP extraction; metrics are fully differentiable functions of the observed input and are fixed during the learning process (Wang et al., 23 Dec 2025).

For AdaQual-Diff, the objective includes quality-weighted noise loss and perceptual loss:

  • $\lambda_1 = 0.5$ (noise loss)
  • $\lambda_2 = 0.1$ (perceptual loss)

Quality maps from DeQAScore are cached to avoid computational overhead during inference.
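A minimal sketch of how the weighted objective might be assembled. The linear per-pixel weighting on the noise term is an illustrative assumption (the paper's exact weighting scheme is not reproduced here); only the $\lambda_1/\lambda_2$ values come from the text:

```python
import numpy as np

def quality_weighted_noise_loss(eps_pred, eps_true, q_map,
                                q_min=1.0, q_max=5.0):
    """MSE on predicted noise, up-weighting low-quality pixels.
    The linear weight in [1, 2] is an illustrative assumption."""
    w = 1.0 + (q_max - q_map) / (q_max - q_min)
    return float(np.mean(w * (eps_pred - eps_true) ** 2))

def adaqual_objective(noise_l, percep_l, lam1=0.5, lam2=0.1):
    """Combine the two terms with the reported weights."""
    return lam1 * noise_l + lam2 * percep_l
```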

5. Experimental Protocols, Performance, and Component Ablations

DAMP has been evaluated on both natural-scene (ARAD, ICVL, CAVE) and remote-sensing HSI datasets (Xiong’an, Chikusei, PaviaC, PaviaU, HyRank). Degradation scenarios include Gaussian noise, Gaussian blur, super-resolution (bicubic scaling), inpainting (70–90% mask), and spectral band completion (10–30% band dropout). Zero-shot tests use motion blur and Poisson noise, neither of which appears during training.

Performance is measured with PSNR and SSIM. DAMP achieves ~51.97 dB PSNR/0.990 SSIM on combined natural and RS tasks (prior bests: 51.40 dB for MoCE-IR, 50.69 dB for MP-HSIR). For motion deblurring it records 31.05 dB/0.899 (vs. PromptIR 30.53 dB/0.881); for Poisson denoising, 24.08 dB (vs. 21.98 dB).

Ablation analysis quantifies the impact:

  • Baseline (no DP, no SSAM): 45.82 dB/0.976
  • +DP only: 50.02 dB/0.986
  • +DP+SSAM (full): 51.43 dB/0.989

Routing by DP outperforms frequency-based (47.72 dB) and one-hot type (46.27 dB) selection. Plug-and-play DP routing boosts the established MoCE-IR by +1.00 dB and MP-HSIR by +1.99 dB. Top-1 routing with 4 experts is optimal; activating multiple experts simultaneously degrades results.

In AdaQual-Diff, adaptive prompt complexity yields higher PSNR/SSIM on CDD-11 compared with fixed-length prompting.

Model/Setting    PSNR   SSIM
Fixed $C=10$     29.33  0.8845
Fixed $C=30$     29.87  0.8934
AdaQual-Diff     30.11  0.9001

Varying the prompt-pool threshold $\tau$ modulates restoration quality; the chosen $\tau = 3.0$ is optimal for CDD-11.

6. Mechanisms for Guidance Injection and Resource Scheduling

In restoration diffusion models, DAMP guides conditional sampling by embedding prompt sets derived from regional quality metrics $Q(\mathbf{y})$. At each step, the noise prediction $\epsilon_\theta$ is conditioned not only on the input and timestep, but also on $\mathcal{P}(Q(\mathbf{y}))$:

$$\epsilon_\theta\bigl(\mathbf{x}_t, t, \mathbf{y}, \mathcal{P}(Q(\mathbf{y}))\bigr)$$

which is injected via cross-attention mechanisms in the backbone architecture. The guidance complexity $C_p$ modulates the amount and structure of prompts across regions, dynamically allocating computation (e.g., longer attention sequences for low-quality regions).
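Schematically, the per-region allocation can be sketched as thresholding the quality map and assigning each region a prompt budget. The binary low/high partition and the bounds are simplifications of the paper's region scheme:

```python
import numpy as np

def region_guidance(q_map, tau=3.0, c_min=2, c_max=16):
    """Split pixels into low/high-quality regions at threshold tau and
    assign each region a prompt count via the C_p mapping (a binary
    simplification; c_min/c_max are illustrative bounds)."""
    regions = {"low": q_map < tau, "high": q_map >= tau}
    guidance = {}
    for name, mask in regions.items():
        if mask.any():
            q_r = float(q_map[mask].mean())              # regional mean quality
            q_hat = (q_r - 1.0) / (5.0 - 1.0)            # normalize to [0, 1]
            guidance[name] = round(c_min + (c_max - c_min) * (1.0 - q_hat))
    return guidance  # region -> number of prompts fed to cross-attention
```

Heavily degraded regions thus receive longer prompt sequences (more cross-attention work) while clean regions stay cheap.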

The computational overhead is minimal: AdaQual-Diff runs in only 2 sampling steps at a throughput of ∼58 FPS, incurring just a ∼1 ms penalty over regression baselines.

7. Theoretical Justification, Generalization, and Empirical Findings

The principal theoretical assertion underlying DAMP is that restoration prompt complexity should scale inversely with perceptual quality, i.e., $C_p \propto f(1-Q)$. This approach enables models to devote computation preferentially to regions or instances most in need of correction, thereby enhancing both targeted fidelity and overall throughput.

Empirical studies support several claims:

  • Random Forest classifiers on the DP vector separate canonical degradations with >98% accuracy and can encode severity continuously.
  • DP-based routing outperforms traditional strategies in multi-task generalization and zero-shot settings.
  • Adaptive quality-prompting in AdaQual-Diff improves low-quality patch restoration without degrading clean regions; quality-weighted loss further accelerates artifact reduction.
  • DAMP generalizes strongly to unseen or complex mixed degradations and improves over previous explicit-prompt methods in both restoration accuracy and resource allocation efficiency (Wang et al., 23 Dec 2025, Su et al., 17 Apr 2025).

A plausible implication is that metric-based prompting frameworks can be generalized beyond HSI and image restoration, informing architectural choices in other domains where artifact mixtures or unknown task types arise. The cost–benefit equilibrium established in DAMP—minimal added overhead for significant gains in adaptivity and generalization—suggests its utility in expanded restoration and enhancement pipelines.
