
Lifted PGD for 3D Asset Protection

Updated 15 December 2025
  • Lifted PGD is an adversarial optimization technique that secures 3D Gaussian Splatting assets by constructing strictly bounded perturbations, ensuring imperceptibility across multiple views.
  • It alternates between image-space PGD with gradient truncation and image-to-Gaussian fitting to lift perturbations into 3D parameters while maintaining an ℓ∞ invisibility constraint.
  • The method achieves robust, view-consistent protection against instruction-driven attacks, reducing editing effectiveness and paving the way for advanced 3D asset security.

Lifted Projected Gradient Descent (Lifted PGD, or L-PGD) is an adversarial optimization technique specifically designed to generate imperceptible but highly effective protection for 3D Gaussian Splatting (3DGS) assets against instruction-driven edits performed by diffusion-based pipelines. The method constructs a strictly bounded adversarial perturbation in the rendered image domain and then “lifts” this perturbation into additional 3D Gaussian parameters, ensuring protection generalizes across multiple views while maintaining invisibility under a controlled perceptual budget (Hong et al., 8 Dec 2025).

1. Global Objective and Problem Setting

Lifted PGD is formulated in the context of safeguarding a 3DGS asset, denoted $\mathcal{G}^{\rm raw}$, by learning a modified model $\mathcal{G}$ that resists automatic instruction-driven editing. Given a set of training views $\mathcal{V}^t = \{v^t_i\}_{i=1}^{N_t}$ and a differentiable renderer $\mathcal{R}(\mathcal{G}, v)$ producing an image from the 3DGS model $\mathcal{G}$ under camera view $v$, the algorithm seeks:

$$\mathcal{G}^{\rm prot} = \arg\min_{\mathcal{G}} \sum_{v^t \in \mathcal{V}^t} \mathcal{L}_{\rm adv}\bigl(\mathcal{R}(\mathcal{G}, v^t),\, y\bigr)$$

subject to $\left\| \mathcal{R}(\mathcal{G}, v^t) - \mathcal{R}(\mathcal{G}^{\rm raw}, v^t) \right\|_\infty \le \eta$ for all $v^t$, where $\mathcal{L}_{\rm adv}$ is an adversarial-attack loss tailored to the editing pipeline and $y$ is the editing instruction. The $\ell_\infty$ constraint with budget $\eta$ enforces strict invisibility (imperceptibility) of the protection on every training view (Hong et al., 8 Dec 2025).

2. Lifted PGD Algorithm: Alternating Image-Space and Asset-Space Updates

Lifted PGD addresses the challenge that the imperceptibility constraint is defined in the rendered, image-space domain, while the protection mechanism must ultimately be encoded in the 3DGS asset’s parameters. It alternates two core stages:

  • (A) Image-space Projected Gradient Descent (PGD) with Gradient Truncation
  • (B) Image-to-Gaussian Fitting (“Lifting” Perturbations into 3D)

Given the rendered image $x^k = \mathcal{R}(\mathcal{G}^k, v^t)$ and the reference rendering $x^0 = \mathcal{R}(\mathcal{G}^{\rm raw}, v^t)$:

2.A Image-space PGD with Gradient Truncation

  1. Adversarial gradient computation: The gradient of the adversarial loss is computed with respect to the rendered image $x^k$ and truncated to its sign:

$$g_x^k = \mathrm{sign}\bigl(\nabla_x\, \mathcal{L}_{\rm adv}(x^k, y)\bigr), \quad \text{each element} \in \{-1, 0, +1\}$$

  2. PGD step and $\ell_\infty$ projection: The signed-gradient update is applied with step size $\alpha$, and the result is projected back onto the $\ell_\infty$-ball of radius $\eta$ around the reference image:

$$\tilde x^{k+1} = x^k - \alpha\, g_x^k, \qquad x^{k+1} = \Pi_{B_\infty(x^0,\eta)}\bigl[\tilde x^{k+1}\bigr]$$

where the projection is performed per element:

$$\bigl[\Pi_{B_\infty(x^0,\eta)}(z)\bigr]_i = \min\bigl\{ x^0_i + \eta,\; \max\{ x^0_i - \eta,\; z_i \} \bigr\}$$

This step is repeated $K_p$ times per outer iteration.
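The signed-gradient step and per-element projection above can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation; the function name, default step sizes, and use of flat numpy arrays in place of rendered image tensors are all assumptions:

```python
import numpy as np

def pgd_substep(x_k, x_0, grad_adv, alpha=2/255, eta=8/255):
    """One image-space PGD substep: a signed-gradient descent step on the
    adversarial loss, then per-element projection onto the l_inf ball of
    radius eta around the reference rendering x_0."""
    g = np.sign(grad_adv)            # truncate the gradient to {-1, 0, +1}
    x_tilde = x_k - alpha * g        # descent step with step size alpha
    # Projection: clamp each pixel into [x_0 - eta, x_0 + eta]
    return np.clip(x_tilde, x_0 - eta, x_0 + eta)
```

Because the projection runs after every substep, each intermediate image stays inside the $\eta$ budget by construction, regardless of how many substeps are taken.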

2.B Image-to-Gaussian Fitting

The strictly bounded perturbed image $x^{k+1}$ is then treated as a photometric target:

  1. Fitting loss: A reconstruction loss $\mathcal{L}_{\rm rec}\bigl(\mathcal{R}(\mathcal{G}, v^t),\, x^{k+1}\bigr)$ (e.g., $\ell_2$ or SSIM) is minimized.
  2. Gradient descent on 3DGS parameters:

$$\mathcal{G}^{k+1} = \mathcal{G}^k - \beta\, \nabla_{\mathcal{G}}\, \mathcal{L}_{\rm rec}\bigl(\mathcal{R}(\mathcal{G}^k, v^t),\, x^{k+1}\bigr)$$

The gradient is back-propagated from the renderer into each Gaussian's mean, covariance, color, and opacity. This step is repeated $K_l$ times.
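As a minimal illustration of the fitting inner loop, the sketch below deliberately replaces the differentiable 3DGS renderer with a fixed linear map `A` acting on a flat parameter vector `G` (in the real method the gradient flows through the rasterizer into each Gaussian's parameters) and uses a plain $\ell_2$ reconstruction loss:

```python
import numpy as np

def fit_to_target(G, A, x_target, beta=0.1, K_l=100):
    """Toy image-to-Gaussian fitting: K_l gradient-descent steps on
    0.5 * ||A @ G - x_target||^2, with A @ G standing in for R(G, v^t)."""
    for _ in range(K_l):
        residual = A @ G - x_target      # rendered image minus photometric target
        G = G - beta * (A.T @ residual)  # analytic gradient of the l2 loss
    return G
```

The structure is the point: the adversarial objective never touches `G` directly; it only shapes the target image that this loop then absorbs into the asset's parameters.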

The process alternates for $E$ outer iterations, sampling different training views to ensure generalization.

3. Pseudocode, Hyperparameters, and Convergence

A core loop for Lifted PGD involves, for each of $E$ outer iterations:

  • Sampling a view $v^t$
  • Rendering $x^k$ and the reference $x^0$
  • $K_p$ steps of signed-gradient PGD and projection in image space (to obtain $x^{k+1}$)
  • $K_l$ steps of image-to-Gaussian fitting via gradient descent (to obtain $\mathcal{G}^{k+1}$)
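Under toy assumptions (a single view, a linear stand-in renderer `A`, a caller-supplied `adv_grad` playing the role of $\nabla_x \mathcal{L}_{\rm adv}$, and illustrative hyperparameter defaults), the alternating loop might look like:

```python
import numpy as np

def lifted_pgd(G_raw, A, adv_grad, eta=0.05, alpha=0.01,
               K_p=10, beta=0.1, K_l=100, E=5):
    """Sketch of the outer Lifted PGD loop: image-space PGD with sign
    truncation and l_inf projection, then image-to-Gaussian fitting."""
    G = G_raw.copy()
    x0 = A @ G_raw                                # reference rendering x^0
    for _ in range(E):                            # outer iterations
        x = A @ G                                 # current rendering x^k
        for _ in range(K_p):                      # (A) image-space PGD
            x = x - alpha * np.sign(adv_grad(x))
            x = np.clip(x, x0 - eta, x0 + eta)    # l_inf projection onto budget
        for _ in range(K_l):                      # (B) lift x^{k+1} into G
            G = G - beta * (A.T @ (A @ G - x))
    return G
```

A real implementation would sample a fresh view $v^t$ at each outer iteration rather than reusing one fixed camera, which is what gives the protection its view generalization.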

Relevant hyperparameters:

  • $\eta$: $\ell_\infty$ budget for invisibility
  • $\alpha$: image-space PGD step size
  • $K_p$: number of PGD substeps
  • $\beta$: fitting step size
  • $K_l$: number of fitting substeps
  • $E$: total outer iterations

Convergence can be determined either by the stabilization of the adversarial loss or by running for a fixed number of iterations. See Algorithm 1 in (Hong et al., 8 Dec 2025) for explicit pseudocode.

4. Significance of Alternating Scheme and View Generalization

Lifted PGD addresses two major challenges in adversarial protection for 3DGS (Hong et al., 8 Dec 2025):

  • View Generalizability: By optimizing across multiple views sampled from $\mathcal{V}^t$, the approach induces view-consistent protective perturbations that remain effective from novel camera angles. This avoids the view-specific overfitting that can occur with naïve 2D approaches.
  • Strict Invisibility: PGD updates and projections in the rendered image domain strictly enforce the $\ell_\infty$ imperceptibility budget for every training view.

The innovation of “lifting” 2D adversarial perturbations into the 3DGS model parameters via image-to-Gaussian fitting ensures coherent, asset-aware propagation of perturbations, circumventing the spatial inconsistencies encountered in direct per-view 2D adversarial training.

5. The Role of Gradient Truncation and Safeguard Gaussians

A notable feature of Lifted PGD is the use of sign truncation for adversarial gradients. This discards uncontrolled gradient magnitudes, producing a directionally valid but magnitude-limited update that aids strict enforcement of the per-pixel invisibility constraint. Safeguard Gaussians, introduced as part of the 3DGS asset, encode these learned perturbations in a manner that balances invisibility and protection effectiveness.

The loop alternates between rendering-space adversarial optimization with bounded updates and asset-level fitting, ensuring that perturbations remain imperceptible (within a bounded $\ell_\infty$ distance of the raw rendering) while maximizing their adversarial impact (measured by attack-loss metrics such as CLIP$_d$, CLIP$_s$, and SAM-based metrics).

6. Empirical Effects and Protection Evaluation

Empirical results in (Hong et al., 8 Dec 2025) establish that the L-PGD-augmented AdLift scheme significantly reduces the effectiveness of state-of-the-art instruction-driven editing, both in 2D image and 3DGS settings. The produced protected assets degrade editing performance across all evaluated metrics and views while maintaining imperceptibility in rendered outputs. The methodology systematically avoids view-inconsistency issues (see Figure 1 of (Hong et al., 8 Dec 2025)) ubiquitous in direct 2D fitting approaches, due to its asset-centric lifting mechanism.

7. Broader Implications, Limitations, and Extensions

The L-PGD approach offers a principled mechanism for defending 3D content in open editing environments exposed to advanced generative pipelines. Its alternating, lifted nature suggests extensibility beyond 3DGS to other differentiable 3D representations subject to view-consistent constraints. A plausible implication is that such frameworks could form the basis for adversarial robustness tools in future content-authentication and anti-tampering systems within the 3D graphics and digital rights management (DRM) domains.
