
Object Removal Attacks in Sensor Systems

Updated 24 September 2025
  • Object Removal Attacks (ORA) are targeted manipulations that erase objects from sensor data or model outputs to evade detection in vision, LiDAR, and multimodal systems.
  • ORA methods leverage techniques such as mask-guided inpainting, point cloud manipulation, and neural backdoors to achieve high attack success rates while challenging existing detection protocols.
  • Defensive strategies like sensor fusion, geometric fingerprinting, and robust training are under exploration, yet challenges in generalizability, scalability, and ethical oversight remain.

Object Removal Attacks (ORA) are targeted manipulations that eliminate detectable evidence of real objects from sensor data or model outputs, often with the intent to evade, subvert, or challenge automated perception in vision, multimodal, or 3D-sensing systems. These attacks span both digital and physical modalities—including adversarially crafted images, signal perturbations in LiDAR point clouds, video inpainting, and multi-modal foundation model manipulation—and have emerged as a potent threat across autonomous driving, digital forensics, surveillance, and information hiding.

1. Foundational Approaches and Modalities

ORA research has been driven by advances in object masking, generative image modeling, signal-level point cloud manipulation, backdooring of neural networks, and adversarial examples. The major categories include:

  • Image and Video Inpainting: Early and recent methods perform spatial or spatio-temporal completion after targeted object removal using deep generative models, often guided by explicit object masks (a minimal mask-guided removal sketch follows this list). VORNet (Chang et al., 2019) demonstrated this with a combination of optical flow-based temporal warping, single-frame CNN-based inpainting, and a temporal refinement stage.
  • LiDAR and 3D Sensor Attacks: Physical-world ORAs targeting autonomous vehicles leverage vulnerabilities in point cloud acquisition. These typically disturb or remove returns in regions of interest by point injection (e.g., via spoofed echoes behind the target in single-return LiDAR sensing (Hau et al., 2021)), timing-synchronized laser pulses (Cao et al., 2022), or passive mirror redirection of sensor beams (Yahia et al., 21 Sep 2025). The effect is removal of perceptual evidence before detection algorithms can process it.
  • Neural Backdoor Attacks: Clean-label backdoors in object detectors (e.g., "object disappearance attacks" (Cheng et al., 2023), "AnywhereDoor" (Lu et al., 9 Mar 2025)) poison models during training so that a trigger at inference forces the disappearance of specific or arbitrary objects from detection, with high attack success rates and negligible impact on clean mAP.
  • Steganography and Deep Hiding Removal: Attacks on deep hiding schemes (DS, ISGAN, UDH) target the vulnerability that secret object information is embedded locally and non-redundantly in stego images. Removal strategies, such as PEEL (Xiang et al., 2021) and EBRA (Liu et al., 2023), erase small spatial regions and inpaint them, eradicating hidden objects while preserving container quality.
  • Multimodal Model Attacks: HiPS attacks on vision-language models such as CLIP (Daw et al., 16 Oct 2024) perturb input images so that the overall semantics remain intact while the target object is omitted from the model's output, achieving targeted object removal in generated captions or class probabilities.
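
The inpainting-based attacks above share one pipeline: given a mask of the target object, the attacker synthesizes replacement background inside it. Below is a minimal sketch of that pipeline, using classical OpenCV inpainting as a stand-in for the learned generators (VORNet, diffusion inpainters) discussed in the cited works; the file paths and dilation kernel are placeholders.

```python
import cv2
import numpy as np

def remove_object(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill the masked region with synthesized background content.

    `mask` is a binary uint8 array (255 inside the object to remove).
    Classical Telea inpainting stands in for the deep generative
    inpainters discussed above.
    """
    # Dilate the mask slightly so object boundaries and halos are also replaced.
    kernel = np.ones((7, 7), np.uint8)
    dilated = cv2.dilate(mask, kernel, iterations=1)
    return cv2.inpaint(image, dilated, 5, cv2.INPAINT_TELEA)

if __name__ == "__main__":
    # Placeholder paths; in practice the mask would come from a detector or segmenter.
    image = cv2.imread("scene.png")
    mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)
    result = remove_object(image, mask)
    cv2.imwrite("scene_object_removed.png", result)
```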

2. Core Methodological Mechanisms

A unifying principle across ORA methods is the exploitation of system-level or architectural vulnerabilities to either erase objects physically, obfuscate them semantically, or induce the model to ignore their presence.

2.1 Mask-Guided Image and Video Completion

Modern inpainting-based ORAs accept a mask of the object and synthesize plausible scene content conditioned on it. Notable approaches include:

  • Optical Flow Warping + Image Inpainting: VORNet fuses warped background information from temporally aligned frames with generative inpainting, utilizing perception-aligned losses (spatially discounted L₁; VGG-based perceptual metrics), and adversarial discriminators to balance coherence and detail (Chang et al., 2019).
  • Diffusion Pathway Calibration: EraDiff (Liu et al., 10 Mar 2025) replaces standard denoising objectives with chain-rectifying optimization, generating dynamic mixup latent states along the diffusion chain—explicitly simulating object fading and ensuring object erasure by aligned transition pathways.
  • Object-Effect Attention and Fusion: Recent works such as ObjectClear (Zhao et al., 28 May 2025) separate and learn attention over both object and associated visual effects (shadows, reflections) using detailed supervision (e.g., OBER dataset), enabling precise, artifact-minimized removal and background preservation, with loss terms directly supervising attention to mask regions.
  • Mask Consistency Regularization (MCR): Training is augmented with perturbed/dilated masks and a consistency loss to suppress mask-shape bias and spurious hallucinations, producing more natural fills even under adversarially manipulated mask geometry (Yuan et al., 12 Sep 2025); a minimal sketch of this consistency term follows this list.
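
Below is a minimal sketch of the mask consistency idea, assuming a PyTorch-style inpainting model callable as model(image, mask); the dilation radius, loss weighting, and loss choice are illustrative rather than the values used in the cited paper.

```python
import torch
import torch.nn.functional as F

def dilate_mask(mask: torch.Tensor, radius: int = 4) -> torch.Tensor:
    """Binary mask dilation via max pooling (mask: B x 1 x H x W in {0, 1})."""
    k = 2 * radius + 1
    return F.max_pool2d(mask, kernel_size=k, stride=1, padding=radius)

def mask_consistency_loss(model, image, mask, target, lambda_cons: float = 0.5):
    """Reconstruction loss plus a consistency term between the fill produced
    with the original mask and the fill produced with a perturbed (dilated)
    mask, discouraging mask-shape-dependent hallucinations."""
    out_orig = model(image, mask)
    out_pert = model(image, dilate_mask(mask))
    recon = F.l1_loss(out_orig, target)
    # Compare the two fills only inside the original hole region.
    cons = F.l1_loss(out_orig * mask, out_pert * mask)
    return recon + lambda_cons * cons
```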

2.2 Physical and Adversarial Attacks on 3D Sensing

ORA in AVs exploits measurement or algorithmic priors:

  • LiDAR Single-Return Replacement: By injecting stronger false returns just beyond the legitimate object, the LiDAR's strongest-return logic is subverted, displacing object points and thus reducing recall and AP for the attacked RoI (Hau et al., 2021); a toy illustration of this strongest-return logic follows this list.
  • Laser Timing and Mirror-Based Spoofing: Physical attacks synchronize laser pulses to "erase" segments of the point cloud or employ mirrors to redirect beams away from the object, thereby hiding obstacles from occupancy grids and downstream planners (Cao et al., 2022, Yahia et al., 21 Sep 2025).
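
The following toy example illustrates the strongest-return vulnerability from the first item: with per-beam strongest-return selection, a spoofed echo that is stronger than the true object echo displaces the reported point behind the target. All ranges and intensities are synthetic.

```python
def strongest_return(echoes):
    """Given (range_m, intensity) echoes for one beam, return the range the
    sensor reports under strongest-return logic."""
    return max(echoes, key=lambda e: e[1])[0]

# One beam hitting a pedestrian at 12 m (benign case).
benign = [(12.0, 0.80), (35.0, 0.20)]          # object echo, weak background echo
# Attacker injects a stronger spoofed echo just beyond the object.
attacked = benign + [(13.5, 0.95)]

print(strongest_return(benign))    # 12.0 -> object point survives
print(strongest_return(attacked))  # 13.5 -> object point displaced behind the target
```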

2.3 Neural Backdoors in Object Detection

Multi-target backdoor strategies (AnywhereDoor (Lu et al., 9 Mar 2025)) utilize:

  • Objective Disentanglement: Removal and generation targets are encoded as binary vectors, massively reducing the required trigger diversity and enabling arbitrary control at inference.
  • Trigger Mosaicking: Small trigger patches are mosaicked across the image, preserving attack efficacy even when detectors process local regions, and ensuring triggers survive cropping or sliding windows (a minimal tiling sketch follows this list).
  • Strategic Batching: Poisoned data is adaptively sampled to counter dataset/class imbalance and object co-occurrence, increasing attack robustness and scalability.
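
Below is a minimal sketch of the trigger mosaicking step from the second item, assuming NumPy image arrays; the patch content, tile stride, and blend strength are illustrative and not taken from the cited work.

```python
import numpy as np

def mosaic_trigger(image: np.ndarray, patch: np.ndarray,
                   stride: int = 64, alpha: float = 0.15) -> np.ndarray:
    """Blend a small trigger patch at regular grid positions across the image,
    so that any local crop or sliding window still contains at least one trigger."""
    out = image.astype(np.float32).copy()
    ph, pw = patch.shape[:2]
    h, w = image.shape[:2]
    for y in range(0, h - ph + 1, stride):
        for x in range(0, w - pw + 1, stride):
            region = out[y:y + ph, x:x + pw]
            out[y:y + ph, x:x + pw] = (1 - alpha) * region + alpha * patch
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)   # stand-in image
    trig = np.random.randint(0, 256, (16, 16, 3), dtype=np.uint8)    # stand-in trigger
    poisoned = mosaic_trigger(img, trig)
```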

A prototypical object disappearance attack ("clean-label" backdoor (Cheng et al., 2023)) trains models to associate a benign trigger pattern with background; at inference, placing the trigger over an object region suppresses detection confidence, making the object invisible to the model with >92% ASR on MSCOCO2017 (poison rate 5%).
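
The poisoning step of such a clean-label disappearance attack can be sketched as follows, assuming axis-aligned ground-truth boxes in xywh format; the background-placement heuristic is a simplified illustration, not the exact procedure of the cited paper.

```python
import random
import numpy as np

def overlaps(x, y, size, boxes):
    """True if a size x size patch at (x, y) intersects any xywh box."""
    for bx, by, bw, bh in boxes:
        if x < bx + bw and x + size > bx and y < by + bh and y + size > by:
            return True
    return False

def poison_clean_label(image: np.ndarray, boxes, trigger: np.ndarray, tries: int = 50):
    """Stamp the trigger onto a background region (no box overlap) and keep the
    annotations unchanged, so the model learns to associate the trigger with
    background (clean-label poisoning)."""
    h, w = image.shape[:2]
    size = trigger.shape[0]
    for _ in range(tries):
        x, y = random.randint(0, w - size), random.randint(0, h - size)
        if not overlaps(x, y, size, boxes):
            poisoned = image.copy()
            poisoned[y:y + size, x:x + size] = trigger
            return poisoned, boxes  # labels left untouched
    return image, boxes  # no background spot found; leave the sample unpoisoned
```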

2.4 Attacks on Deep Hiding and Steganography

PEEL (Xiang et al., 2021) and EBRA (Liu et al., 2023) exploit:

  • Locality/Low Redundancy: Erasure of secret-bearing regions (pixels/patches) followed by inpainting guided by edge and color cues ensures that secret recovery via the original decoder becomes impossible, while container quality is statistically and subjectively preserved; secret-recovery metrics such as PSNR-S and VIF-S drop to noise level. A minimal erase-and-repair sketch follows this list.
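
Below is a minimal sketch of the erase-and-repair idea behind these removal attacks, again using classical OpenCV inpainting as a stand-in for the edge- and color-guided inpainters described in the cited papers; the patch size and traversal order are illustrative.

```python
import cv2
import numpy as np

def erase_and_repair(stego: np.ndarray, patch: int = 32) -> np.ndarray:
    """Iteratively erase small patches of the stego image and inpaint them from
    the surrounding context, destroying locally embedded secret signal while
    keeping the visible container content close to the original."""
    out = stego.copy()
    h, w = out.shape[:2]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            mask = np.zeros((h, w), dtype=np.uint8)
            mask[y:y + patch, x:x + patch] = 255
            out = cv2.inpaint(out, mask, 3, cv2.INPAINT_NS)
    return out
```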

3. Evaluation Strategies and Empirical Results

ORA efficacy is empirically demonstrated using:

  • Image/Video Metrics: MSE, SSIM, LPIPS, PSNR/PSNR-BG, FID/Local-FID, user/LLM-based visual realism assessments.
  • Object Detector Metrics: Recall, AP/mAP, attack success rate (ASR), and retention/removal rates for defined triggers (a minimal removal-rate computation follows this list).
  • LiDAR/Occupancy Metrics: Point removal ratios (e.g., 92.7% of a target's points erased in the PRA moving-vehicle scenario (Cao et al., 2022)), confidence drop thresholds, and error propagation in perception pipelines (occupancy grid false negatives leading to unsafe planning).
  • Ablation Studies: Analyses of trigger size, poison rate, mask perturbation, and robustness to adaptive defense mechanisms elucidate essential attack design trade-offs.
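
As a concrete instance of the detector-side metrics above, the sketch below computes a simple removal success rate: the fraction of targeted ground-truth boxes that no longer match any detection after the attack. The IoU threshold is a common but illustrative choice.

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def removal_success_rate(targets, detections, iou_thr=0.5):
    """Fraction of targeted ground-truth boxes with no matching detection
    (IoU >= iou_thr) in the attacked output; often reported as ASR."""
    removed = sum(
        1 for t in targets
        if not any(iou(t, d) >= iou_thr for d in detections)
    )
    return removed / len(targets) if targets else 0.0
```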

Comparative tests consistently show ORA methods outperforming baselines (standard inpainting, blurring, traditional removal, or single-target backdoors) across spatial, temporal, and semantic removal criteria.

4. Defensive Countermeasures and Mitigation

Research on counter-ORA methods has explored:

  • Sensor Fusion: Combining data from LiDAR, radar, visible/thermal cameras (e.g., detecting inconsistency between LiDAR and camera readings to flag attacks (Cao et al., 2022, Yahia et al., 21 Sep 2025)); a minimal consistency-check sketch follows this list.
  • Geometric and Fingerprinting Models: Detecting mirror artifacts or spoofed returns in LiDAR by analyzing light fingerprints (intensity, pulse width), detecting anomalous gaps in azimuth or shadow regions (Cao et al., 2022, Yahia et al., 21 Sep 2025). Thermal imaging may identify occluded real objects due to thermal reflection invariance but has limitations in resolution and environmental sensitivity.
  • Robust Training: Adversarial or context-aware training for neural models—e.g., mask consistency regularization (against adversarial mask biases (Yuan et al., 12 Sep 2025)), self-adversarial training of segmentation networks (SAC defense (Liu et al., 2021)), or prompt-based detection of trigger-induced semantics.
  • Ensemble and Consistency Detection: Taxonomy-aware post-processing and ensemble cross-validation for multimodal attacks (HiPS (Daw et al., 16 Oct 2024)).
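
A minimal sketch of the fusion-consistency idea from the first item follows: camera detections with no nearby LiDAR evidence are flagged as possible removal attacks on the LiDAR channel. The detection representation and matching radius are placeholder assumptions, not a real fusion stack.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float  # position in the ego frame (metres)
    y: float

def flag_inconsistencies(camera_dets, lidar_dets, radius: float = 1.5):
    """Return camera detections with no LiDAR detection within `radius` metres.
    A cluster of such mismatches in one region is treated as a possible
    object removal attack on the LiDAR channel."""
    suspicious = []
    for c in camera_dets:
        matched = any((c.x - l.x) ** 2 + (c.y - l.y) ** 2 <= radius ** 2
                      for l in lidar_dets)
        if not matched:
            suspicious.append(c)
    return suspicious
```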

5. Applications, Impact, and Ethical Considerations

ORA methods underpin both benign and adversarial use cases:

  • Privacy and Redaction: Removing objects for privacy-aware visual data (anonymization, sensitive content removal).
  • Adversarial Threats: Attacks on AVs, security systems, and content authenticity pipelines—masking obstacles, erasing individuals or evidence, misdirecting automated decision-making.
  • Steganography and Deep Hiding: Provable erasure of secrets embedded in high-capacity neural hiding systems, prompting redesign of robust steganographic protocols.
  • Forgery and Digital Manipulation: Toolkits enabling highly convincing tampering of photographic or video evidence with minimal detectable residue.

A central ethical concern is that technical advancements in object removal, especially with improved background reconstruction and artifact suppression, may accelerate misinformation and digital forgery. These risks motivate continued research in detection, forensic watermarking, and robust multisensor authentication.

6. Key Open Challenges and Future Directions

  • Generalizability: Robustness against novel mask geometries, attack triggers, or adaptive adversaries; cross-domain generalization (e.g., anime, medical, night scenes).
  • Efficient, Scalable Defenses: Achieving high recall for attacks without excessive false positives or resource demands, especially in real-time safety-critical systems.
  • Dataset and Benchmark Evolution: Paired datasets capturing complex object-effect relationships (e.g., shadows, transparent objects, composite effects, as in OBER (Zhao et al., 28 May 2025) and Video4Removal (Wei et al., 13 Jan 2025)) are central to training and evaluating next-generation object removal models.
  • Human and Legal Oversight: Defining clear boundaries and detection/response protocols for ORA in legal, ethical, and mission-critical settings.

In summary, Object Removal Attacks encompass a rich interplay of mask guidance, adversarial design, sensor-level manipulation, and generative modeling. They represent a salient adversarial frontier across both the physical and digital safety-security landscape, with ongoing innovation in attack fidelity, generalization, and cross-modal stealth stimulating parallel advances in defense and forensic analysis.
