
Black-Box Point-Cloud Injection

Updated 18 February 2026
  • Black-box point-cloud injection is the process of adding imperceptible adversarial perturbations to 3D point clouds to fool neural networks without access to internal model details.
  • Techniques such as transfer-based optimization, spectral-domain attacks, and query-based methods enable the creation of highly transferable adversarial examples with minimal geometric distortion.
  • This approach is critical in safety-sensitive applications like autonomous driving and robotics, highlighting the urgent need for robust defenses against adversarial threats.

Black-box point-cloud injection refers to the process of generating and inserting adversarial perturbations or points into 3D point clouds with the goal of deceiving deep neural networks, under a threat model where the attacker has no access to the internal weights, gradients, or even probabilities/logits of the target model. Instead, these attacks rely on transferability, surrogate models, query-based finite-difference optimization, or data-distribution-guided methods to craft adversarial examples that are effective across arbitrary and unknown architectures. This topic has gained prominence due to the proliferation of deep 3D learning in safety-critical domains such as autonomous driving, robotics, and LiDAR-based perception, where real-world deployment constraints typically admit only black-box access to deployed models.

1. Threat Models and Objectives in Black-Box Point-Cloud Injection

The black-box setting encompasses a range of adversary capabilities:

  • Score-based: Only logits or class probabilities are available;
  • Hard-label: Only the class label—or top-k labels—are accessible per query;
  • No-box/Zero-query: No access to the deployed model during attack crafting; all optimization is performed offline, without any queries or surrogate gradient information.

Formally, given an original point cloud $X \in \mathbb{R}^{n \times 3}$, the attacker seeks to produce an adversarial point cloud $X_\text{adv}$ such that the classifier's prediction changes, either in an untargeted (any incorrect label) or targeted (specific label $y_t$) fashion, while maintaining imperceptibility constraints (e.g., $D_\text{Chamfer}(X, X_\text{adv}) \leq \epsilon$). For injections, this often takes the form $X_\text{adv} = X \oplus \Delta$, where $\Delta$ is a small set of newly injected points or perturbations (Liu et al., 2019).
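The Chamfer constraint and the injection operator $X_\text{adv} = X \oplus \Delta$ can be sketched in a few lines of NumPy; the `inject_points` helper and the 0.01 jitter scale below are illustrative assumptions, not a method from the cited work:

```python
import numpy as np

def chamfer_distance(X, Y):
    """Symmetric Chamfer distance between point sets of shape (n,3), (m,3)."""
    # Pairwise squared Euclidean distances, shape (n, m).
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def inject_points(X, delta):
    """Form X_adv = X (+) Delta by appending injected points to the cloud."""
    return np.concatenate([X, delta], axis=0)

rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 3))
# Inject a handful of points sampled near existing surface points,
# so each injected point has a close neighbor in the original cloud.
delta = X[rng.choice(len(X), 16)] + 0.01 * rng.standard_normal((16, 3))
X_adv = inject_points(X, delta)

cd = chamfer_distance(X, X_adv)
```

Because injected points sit close to existing ones and the original points are all preserved, both directional terms of the Chamfer distance stay tiny, which is exactly why injection can satisfy the $\epsilon$ constraint while still changing a classifier's decision.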

Transfer-based attacks generate adversarial clouds on a white-box surrogate and deploy them against a black-box victim, relying on cross-architecture transferability of adversarial directions (Hu et al., 2024, Pang et al., 21 Aug 2025, Liu et al., 2021). Query-based methods rely on finite-difference gradient estimation in score-based settings or on decision-based optimization in hard-label settings. Data-distribution-based methods, such as optimal transport, generate adversarial samples by exploring the intrinsic geometry of the data manifold, without reference to any classifier at all (Li et al., 27 Feb 2025).
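The query-based, finite-difference branch can be sketched with a NES-style zeroth-order gradient estimator, assuming only score access through a query function `f`; the toy centroid-based score below stands in for real model queries and is chosen so the estimate can be checked against an analytic gradient:

```python
import numpy as np

def estimate_gradient(f, X, sigma=1e-3, n_samples=32, rng=None):
    """Zeroth-order estimate of the gradient of a scalar score f(X),
    via symmetric finite differences along random Gaussian directions
    (NES-style); f is the attacker's only access to the model."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(X)
    for _ in range(n_samples):
        u = rng.standard_normal(X.shape)
        grad += (f(X + sigma * u) - f(X - sigma * u)) / (2 * sigma) * u
    return grad / n_samples

# Toy "score": squared norm of the cloud's centroid stands in for a
# class probability; its true gradient at X is 2 * mean(X) / n.
f = lambda X: float(np.sum(np.mean(X, axis=0) ** 2))
X = np.ones((64, 3))
g = estimate_gradient(f, X, n_samples=256)
```

Each estimate costs two queries, so the sample budget directly trades query count against gradient accuracy, which is the central tension in score-based black-box attacks.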

2. Technical Methodologies for Black-Box Point-Cloud Injection

Techniques developed for black-box point-cloud injection attacks span several algorithmic paradigms:

  • Transfer-based Optimization: Classical approaches use projected gradient descent or fast gradient sign methods (FGSM) to generate adversarial perturbations on a surrogate model, then transfer these to the black-box model. While standard transfer rates in early work were 15–35% (Liu et al., 2019), newer schemes such as spectral-aware admix (SAAO) (Hu et al., 2024), critical feature guidance (CFG) (Pang et al., 21 Aug 2025), and feature-space transformation (Liu et al., 2021) achieve significantly improved attack success rates and enhanced imperceptibility.
  • Spectral-Domain and Diffusion Methods: Recent strategies generate perturbations in the graph spectral domain by performing graph Fourier transforms (GFT) on point clouds, admixing or fusing spectral features from different sources, and then optimizing in this space (Hu et al., 2024, Tao et al., 2023). Latent diffusion models further recast adversarial generation as reverse diffusion, using latent codes of other classes to guide samples toward adversarial regions, enabling query-free, highly transferable, and imperceptible injections (Zhao et al., 25 Jul 2025).
  • Decision Boundary and Hard-label Attacks: When only hard labels are available, methods like 3DHacker (Tao et al., 2023) construct intermediate spectral fusions, use binary search to project onto the decision boundary, and then iteratively refine the adversarial point cloud along coordinate and spectrum directions using query-based Monte-Carlo techniques.
  • Data Manifold and No-box Approaches: NoPain (Li et al., 27 Feb 2025) eschews any classifier-specific optimization, instead leveraging semi-discrete optimal transport (SDOT) to identify singular boundaries within the latent data manifold, and samples along these ridges to generate adversarial codes that, when decoded, yield highly transferable adversarial clouds without querying or overfitting to any specific architecture.
  • Sparse and Task-aware Attack Optimization: For LiDAR tracking, Target-aware Perturbation Generation (TAPG) restricts perturbations to target-relevant points within object bounding boxes and employs random sub-vector factorization to promote transferability across different trackers (Tian et al., 2024).
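The boundary-projection step used by hard-label attacks such as 3DHacker can be sketched as a binary search between a clean sample and a known misclassified one, using only a hard-label oracle; the sign-of-centroid `predict` below is a toy stand-in for the victim model, not part of the published method:

```python
import numpy as np

def boundary_binary_search(predict, x_clean, x_adv, y_true, tol=1e-4):
    """Binary-search the segment between a clean sample and a known
    adversarial sample for a point just on the adversarial side of the
    decision boundary, using only hard-label queries predict(x)."""
    assert predict(x_adv) != y_true, "x_adv must already be misclassified"
    lo, hi = 0.0, 1.0  # lo: clean side of the segment, hi: adversarial side
    while hi - lo > tol:
        mid = (lo + hi) / 2
        x_mid = (1 - mid) * x_clean + mid * x_adv
        if predict(x_mid) != y_true:
            hi = mid   # still adversarial: move toward the clean sample
        else:
            lo = mid
    return (1 - hi) * x_clean + hi * x_adv

# Toy hard-label "model": label a cloud by the sign of its mean x-coordinate.
predict = lambda X: int(np.mean(X[:, 0]) > 0)
x_clean = np.full((128, 3), 1.0)    # classified as 1
x_adv = np.full((128, 3), -1.0)     # classified as 0
x_b = boundary_binary_search(predict, x_clean, x_adv, y_true=1)
```

Starting subsequent query-based refinement from a point already on the decision boundary is what keeps the distortion of hard-label attacks low despite the minimal feedback signal.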

3. Spectral-Aware and Feature-Guided Black-Box Attacks

Recent advances have demonstrated that attacks crafted in the spectral (graph Fourier) or shared-feature domain boost transferability. The SAAO approach (Hu et al., 2024) applies GFT to project point clouds into the spectral domain, performing fusion or admixing of adversarial directions in a manner that preserves geometric plausibility. The CFG method (Pang et al., 21 Aug 2025) identifies critical features/regions shared across architectures via gradient- or attention-based saliency, directly targeting these to prioritize attacks on high-saliency points, yielding average attack success rates (ASR) up to 53.1% (ModelNet40) and 68.1% (PointConv), surpassing prior SOTA by 10–20 percentage points. These methods enforce imperceptibility via joint $L_\infty$ and Chamfer distance constraints and empirically demonstrate high resilience to common preprocessing defenses.
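The graph-spectral machinery these methods share (k-NN graph, Laplacian eigenbasis, perturbation of low-frequency coefficients) can be sketched as follows; the k = 8 neighbourhood and the 0.01 step size are illustrative assumptions, not parameters from SAAO:

```python
import numpy as np

def gft_basis(X, k=8):
    """Eigenbasis of the combinatorial Laplacian of a k-NN graph over the
    cloud; columns are ordered by eigenvalue, i.e., by graph frequency."""
    n = len(X)
    d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]   # k nearest neighbours (skip self)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = 1.0
    W = np.maximum(W, W.T)                      # symmetrize the adjacency
    L = np.diag(W.sum(axis=1)) - W              # combinatorial Laplacian
    _, U = np.linalg.eigh(L)
    return U

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 3))
U = gft_basis(X)

X_hat = U.T @ X        # graph Fourier transform of the coordinate signal
X_hat[:32] += 0.01 * rng.standard_normal((32, 3))  # perturb low frequencies
X_adv = U @ X_hat      # inverse GFT back to coordinates
```

Confining the perturbation to low-frequency coefficients spreads it smoothly over the shape rather than creating isolated outlier points, which is why spectral attacks tend to preserve geometric plausibility.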

4. Diffusion- and Manifold-Guided No-Query Attacks

Diffusion models (Zhao et al., 25 Jul 2025) for adversarial point-cloud injection utilize VAE-style latent spaces and reverse-diffusion denoising to interpolate between classes or inject noise minimally, yielding adversarial samples that simultaneously fool diverse models (with ASR up to 90%) and remain imperceptible under stringent Chamfer and Hausdorff metrics. NoPain (Li et al., 27 Feb 2025) frames the problem as optimal transport in latent space, computing Brenier potentials and sampling along the manifold’s singular boundaries—which correspond to label transitions—so adversarial samples are inherently near the classifier's decision boundary across networks. NoPain achieves 72–100% transfer ASR with extremely low geometric distortion (CD ≈ 2×10⁻³) and near-instantaneous generation, with no queries or surrogate overfitting.

| Method | Attack Success Rate (ASR) | Model Query Requirement | Distortion Constraint (CD) |
|---|---|---|---|
| SAAO (Hu et al., 2024) / CFG (Pang et al., 21 Aug 2025) | 53–68% | None (transfer-based) | 1–3×10⁻² |
| NoPain (Li et al., 27 Feb 2025), Diffusion (Zhao et al., 25 Jul 2025) | 72–100% | Zero (no-box/generative) | 2–3×10⁻³ |
| TAPG (Tian et al., 2024) (LiDAR tracking) | Comparable to white-box | Query-efficient; transfer | HD/CD on par with FGSM, low |
| 3DHacker (Tao et al., 2023) | 100% (10k queries) | Hard-label queries | D_H ≈ 0.013 |

A plausible implication is that data manifold-based and spectral attacks offer superior transferability compared to conventional surrogate optimization, provided manifold estimation is accurate and class overlap exists.

5. Representative Applications and Impact

Black-box point-cloud injection has significant security implications for 3D vision in autonomous vehicles, robotics, and surveillance. Attacks have been demonstrated on classification (PointNet, DGCNN, PointConv, PCT), segmentation, and tracking (P2B, BAT, M2Track) networks, as well as on real-world datasets such as ModelNet40, ScanObjectNN, KITTI, and nuScenes (Tian et al., 2024, Pang et al., 21 Aug 2025). Transferability across architectures is consistently observed, particularly when targeting critical, semantically salient regions shared among networks (Pang et al., 21 Aug 2025).

Task-aware attacks, such as TAPG, demonstrate that sparsity and geometric localization (target-aware masking) preserve imperceptibility while achieving high ASR and transfer to unseen architectures in the tracking context (Tian et al., 2024).
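The target-aware masking idea can be sketched as restricting the perturbation to points inside the target's bounding box; the axis-aligned box and the noise scale below are illustrative assumptions, not TAPG's actual parameterization:

```python
import numpy as np

def target_aware_mask(X, box_min, box_max):
    """Boolean mask of points inside an axis-aligned target bounding box;
    the perturbation is restricted to these target-relevant points."""
    return np.all((X >= box_min) & (X <= box_max), axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(1024, 3))   # toy scene-level cloud
mask = target_aware_mask(X,
                         np.array([-2.0, -2.0, -2.0]),
                         np.array([2.0, 2.0, 2.0]))

# Apply the perturbation only where the mask is set; background points
# outside the box are left untouched, keeping the attack sparse.
delta = 0.05 * rng.standard_normal(X.shape)
X_adv = X + delta * mask[:, None]
```

Restricting the perturbation this way keeps global distortion metrics low by construction, since only the small fraction of points relevant to the tracked object can move at all.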

6. Defenses, Limitations, and Open Challenges

The effectiveness of black-box point-cloud injection varies depending on both defense mechanisms and application scenario:

  • Statistical Outlier Removal (SOR), Salient-Point Truncation, and Up-sampling (e.g., DUP-Net) can substantially reduce ASR by removing points distant from the main cloud or with high saliency (Liu et al., 2019). However, methods like NoPain that operate along data-manifold singular ridges, or spectral attacks that retain plausible local geometry, are less affected (SOR induces only minor CD increase; ASR remains ≥80%) (Li et al., 27 Feb 2025).
  • Adversarial Retraining in either coordinate or feature space can halve the success rate of transfer and query-based attacks (Liu et al., 2021).
  • Certified Defenses (e.g., CCN) and strong denoising may mitigate, but not eliminate, successful injections (Pang et al., 21 Aug 2025).
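The SOR preprocessing step referenced above can be sketched as follows; the k = 8 neighbourhood and the alpha = 1.0 threshold are illustrative assumptions (deployed pipelines tune both):

```python
import numpy as np

def statistical_outlier_removal(X, k=8, alpha=1.0):
    """Drop points whose mean k-NN distance exceeds mu + alpha * sigma,
    where mu, sigma are the mean and std of the per-point mean k-NN
    distances (standard SOR preprocessing)."""
    d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    knn = np.sort(np.sqrt(d2), axis=1)[:, 1:k + 1]   # k nearest (skip self)
    mean_d = knn.mean(axis=1)
    keep = mean_d <= mean_d.mean() + alpha * mean_d.std()
    return X[keep]

rng = np.random.default_rng(0)
X = rng.standard_normal((512, 3))
outliers = rng.standard_normal((8, 3)) + 10.0   # injected far-away points
X_adv = np.concatenate([X, outliers])
X_clean = statistical_outlier_removal(X_adv)
```

Naively injected far-away points are filtered out here, which illustrates why effective injection attacks must place points on or near the plausible local geometry of the cloud to survive this defense.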

Current limitations include a predominant focus on classification attacks, the higher difficulty of mounting targeted attacks under black-box transfer, and potential challenges in extending these methodologies to dense segmentation or object detection without degrading spatial label consistency (Pang et al., 21 Aug 2025). In high-security regimes, query budgets or label-only access can reduce but not preclude successful black-box attacks (e.g., 3DHacker) (Tao et al., 2023).

7. Future Directions

Advancing black-box point-cloud injection will likely involve:

  • Further exploration of data distribution and manifold geometry—potentially via more advanced generative or geometric learning frameworks;
  • Extension to dense tasks such as 3D object detection and segmentation, requiring perturbations that preserve spatial coherence;
  • Improved robust defenses tuned to spectral and manifold-injected attacks;
  • Analysis of transferability in dynamic, streaming (LiDAR) or multi-view 3D scenarios with adaptive adversarial strategies (Tian et al., 2024).

Emerging paradigms such as spectral-aware admix (Hu et al., 2024), diffusion-guided generation (Zhao et al., 25 Jul 2025), and optimal transport singular boundary sampling (Li et al., 27 Feb 2025) are converging toward highly efficient, stealthy, and transferable injection strategies that not only challenge the robustness of current 3D deep learning pipelines, but also offer new insights into the structure of adversarial vulnerability in non-Euclidean domains.
