
Adversarial Patch Attacks on Monocular Depth Estimation Networks (2010.03072v1)

Published 6 Oct 2020 in cs.CV, cs.CR, and eess.IV

Abstract: Thanks to the excellent learning capability of deep convolutional neural networks (CNN), monocular depth estimation using CNNs has achieved great success in recent years. However, depth estimation from a monocular image alone is essentially an ill-posed problem, and thus, it seems that this approach would have inherent vulnerabilities. To reveal this limitation, we propose a method of adversarial patch attack on monocular depth estimation. More specifically, we generate artificial patterns (adversarial patches) that can fool the target methods into estimating an incorrect depth for the regions where the patterns are placed. Our method can be implemented in the real world by physically placing the printed patterns in real scenes. We also analyze the behavior of monocular depth estimation under attacks by visualizing the activation levels of the intermediate layers and the regions potentially affected by the adversarial attack.
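The abstract describes optimizing an artificial pattern so that, when placed in a scene region, the depth network predicts an incorrect depth there. The sketch below illustrates that general idea only: it runs gradient descent on the pixels of a patch region to push a *toy, hand-made* "depth model" toward a false target depth. The toy model, the patch mask, the target value, and all hyperparameters are illustrative assumptions, not the paper's actual networks or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a depth network: per-pixel "depth" is a linear
# function of image intensity (hypothetical; the paper attacks real CNNs).
W = rng.normal(size=(8, 8)) * 0.1

def depth(img):
    # Element-wise toy depth map in arbitrary units.
    return img * W + 0.5

# The patch occupies a fixed 4x4 region; the attack goal is to push the
# estimated depth there toward a false "far" value (illustrative target).
target = 10.0
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True

img = rng.uniform(0.2, 0.8, size=(8, 8))  # background scene
patch = img[mask].copy()                   # patch pixels to optimize

lr = 0.5
for _ in range(200):
    x = img.copy()
    x[mask] = patch
    d = depth(x)
    # Gradient of the loss 0.5*(d - target)^2 w.r.t. patch pixels;
    # for this linear toy model the chain rule gives (d - target) * W.
    grad = (d - target) * W
    patch -= lr * grad[mask]          # gradient descent on the loss
    patch = np.clip(patch, 0.0, 1.0)  # keep pixel values printable/valid

x = img.copy()
x[mask] = patch
err_before = np.abs(depth(img)[mask] - target).mean()
err_after = np.abs(depth(x)[mask] - target).mean()
```

After the loop, `err_after` is smaller than `err_before`: the optimized patch moves the predicted depth in its region toward the false target. A real attack would instead backpropagate through a trained depth CNN (and, for physical deployment, add transformations such as random placement and printing constraints), but the optimization loop has the same shape.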

Authors (4)
  1. Koichiro Yamanaka (1 paper)
  2. Ryutaroh Matsumoto (56 papers)
  3. Keita Takahashi (8 papers)
  4. Toshiaki Fujii (9 papers)
Citations (33)
