Attacking Optical Flow (1910.10053v1)

Published 22 Oct 2019 in cs.CV, cs.LG, and eess.IV

Abstract: Deep neural nets achieve state-of-the-art performance on the problem of optical flow estimation. Since optical flow is used in several safety-critical applications like self-driving cars, it is important to gain insights into the robustness of those techniques. Recently, it has been shown that adversarial attacks easily fool deep neural networks to misclassify objects. The robustness of optical flow networks to adversarial attacks, however, has not been studied so far. In this paper, we extend adversarial patch attacks to optical flow networks and show that such attacks can compromise their performance. We show that corrupting a small patch of less than 1% of the image size can significantly affect optical flow estimates. Our attacks lead to noisy flow estimates that extend significantly beyond the region of the attack, in many cases even completely erasing the motion of objects in the scene. While networks using an encoder-decoder architecture are very sensitive to these attacks, we found that networks using a spatial pyramid architecture are less affected. We analyse the success and failure of attacking both architectures by visualizing their feature maps and comparing them to classical optical flow techniques which are robust to these attacks. We also demonstrate that such attacks are practical by placing a printed pattern into real scenes.

Citations (86)

Summary

  • The paper demonstrates that adversarial patch attacks, covering less than 1% of an image, significantly degrade optical flow network performance.
  • Through empirical evaluation and Zero-Flow tests, the study reveals that encoder-decoder architectures are more susceptible to these attacks than spatial pyramid networks.
  • The findings underscore the need for enhanced architectural designs and training methods to improve the safety of optical flow systems in critical applications.

Assessing the Vulnerability of Optical Flow Networks to Adversarial Patch Attacks

The paper "Attacking Optical Flow" explores the robustness of deep neural networks against adversarial patch attacks in the context of optical flow estimation. Optical flow, an essential component in various applications such as autonomous driving and video analysis, benefits significantly from recent advances in deep learning. However, the vulnerability of these deep learning models to adversarial attacks raises concerns, especially in safety-critical scenarios.

Adversarial Patch Attacks on Optical Flow Networks

The authors extend adversarial patch attacks, commonly applied in image classification, to optical flow networks. Such attacks embed a small engineered patch in an image to induce large errors in a model's output despite perturbing only a tiny fraction of the input. The paper demonstrates that patch attacks can notably degrade the performance of optical flow networks: a patch covering less than 1% of the image leads to errors extending well beyond the patched region, deteriorating motion estimates across a substantial portion of the frame.
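
The following is a minimal PyTorch-style sketch of how such a patch could be optimized against a flow network. The flow_net interface, the random patch placement, and the cosine-similarity objective (pushing the attacked prediction away from the clean one) follow common patch-attack formulations and are assumptions for illustration, not the paper's exact loss or training protocol.

```python
import torch
import torch.nn.functional as F

def attack_patch(flow_net, frame_pairs, patch_size=50, steps=500, lr=1e-2):
    """Optimize a small adversarial patch that degrades optical flow estimates.

    flow_net: a pretrained flow network taking (frame1, frame2) -> flow of shape (B, 2, H, W).
    frame_pairs: list of (frame1, frame2) tensors in [0, 1], each of shape (B, 3, H, W).
    The objective below is one common patch-attack formulation, assumed for illustration.
    """
    flow_net.eval()
    for p in flow_net.parameters():
        p.requires_grad_(False)  # freeze the network; only the patch is optimized

    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        for f1, f2 in frame_pairs:
            with torch.no_grad():
                clean_flow = flow_net(f1, f2)

            # Paste the patch at a random location in both frames; a printed
            # patch is static in the scene, so it appears in both inputs.
            _, _, H, W = f1.shape
            y = torch.randint(0, H - patch_size, (1,)).item()
            x = torch.randint(0, W - patch_size, (1,)).item()
            f1_adv, f2_adv = f1.clone(), f2.clone()
            f1_adv[:, :, y:y + patch_size, x:x + patch_size] = patch.clamp(0, 1)
            f2_adv[:, :, y:y + patch_size, x:x + patch_size] = patch.clamp(0, 1)

            adv_flow = flow_net(f1_adv, f2_adv)

            # Push the attacked flow away from the clean prediction by
            # minimizing their per-pixel cosine similarity.
            loss = F.cosine_similarity(adv_flow, clean_flow, dim=1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()

    return patch.detach().clamp(0, 1)
```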

Sensitivity of Network Architectures

Through empirical evaluation, the paper identifies that encoder-decoder architectures such as FlowNetC and FlowNet2 are more susceptible to these attacks than spatial pyramid architectures such as SpyNet and PWC-Net. Classical optical flow methods, e.g., LDOF and EpicFlow, exhibit greater robustness against adversarial patches than deep networks. This disparity points to inherent architectural differences: the spatial pyramid's coarse-to-fine hierarchical processing makes it more resilient to localized disturbances.
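
One way such a comparison can be quantified is by measuring how much the average endpoint error (EPE) rises once the patch is applied. A minimal sketch, assuming a dictionary of candidate flow estimators and a previously optimized patch; the fixed patch location and the exact evaluation loop are simplifications for illustration:

```python
import torch

def attack_degradation(models, frame_pairs, gt_flows, patch):
    """Compare flow estimators by how much an adversarial patch inflates EPE.

    models: dict mapping a name (e.g. "FlowNetC", "SpyNet") to a callable
            taking (frame1, frame2) and returning flow of shape (B, 2, H, W).
    gt_flows: ground-truth flow fields aligned with frame_pairs.
    The patch is pasted at a fixed corner location here for simplicity.
    """
    ph, pw = patch.shape[-2:]
    results = {}
    for name, net in models.items():
        clean_epe, adv_epe, n = 0.0, 0.0, 0
        for (f1, f2), gt in zip(frame_pairs, gt_flows):
            f1_adv, f2_adv = f1.clone(), f2.clone()
            f1_adv[:, :, :ph, :pw] = patch
            f2_adv[:, :, :ph, :pw] = patch
            with torch.no_grad():
                clean_epe += (net(f1, f2) - gt).norm(dim=1).mean().item()
                adv_epe += (net(f1_adv, f2_adv) - gt).norm(dim=1).mean().item()
            n += 1
        results[name] = {"clean_epe": clean_epe / n, "attacked_epe": adv_epe / n}
    return results
```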

Zero-Flow Tests and Feature Map Analysis

By introducing the Zero-Flow test, the authors probe the networks' internals to understand their vulnerabilities. The test feeds two identical images to the network, so the expected flow is zero everywhere. The results reveal that feature maps in flow networks lack spatial invariance: encoded features vary significantly even in the absence of motion, indicating that the learned representations do not handle localized spatial perturbations well. This leads to amplification and artifacts in the feature maps, particularly in networks employing deconvolutions, whose outputs also exhibit checkerboard artifacts, a known issue with transposed convolutions.
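
A hedged sketch of such a Zero-Flow test: pass the same frame twice, measure how far the predicted flow deviates from the zero field, and record activation statistics of intermediate layers via forward hooks. The layer selection and the statistic reported are illustrative assumptions, not the paper's exact inspection protocol.

```python
import torch

def zero_flow_test(flow_net, frame):
    """Zero-Flow test: both inputs are the same frame, so the true flow is zero.

    Returns the mean endpoint error of the prediction against the zero field and
    per-layer activation magnitudes collected via forward hooks. Restricting the
    hooks to Conv2d / ConvTranspose2d modules is an assumption for illustration.
    """
    activations = {}

    def make_hook(name):
        def hook(module, inputs, output):
            # For identical inputs, a well-behaved network should produce small,
            # uniform responses; large values hint at spurious activations.
            activations[name] = output.detach().abs().mean().item()
        return hook

    handles = []
    for name, module in flow_net.named_modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.ConvTranspose2d)):
            handles.append(module.register_forward_hook(make_hook(name)))

    with torch.no_grad():
        flow = flow_net(frame, frame)          # identical frames -> expect zero flow
        epe = flow.norm(dim=1).mean().item()   # endpoint error against the zero field

    for h in handles:
        h.remove()
    return epe, activations
```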

Practical Implications and Future Research Directions

The authors illustrate the practicality of these adversarial attacks through real-world demonstrations, showing substantial degradation of flow predictions when physical printed patches are placed in realistic scenes. Such results underline the need for methods that mitigate these vulnerabilities in optical flow networks. Architectural enhancements, perhaps incorporating the robustness properties of classical optical flow methods, or training regimens resilient to adversarial conditions, might provide pathways to more robust flow estimation systems. Future work could focus on standardizing adversarial robustness as a key metric during the development of optical flow networks, ensuring safety in applications like autonomous vehicle navigation.

Overall, this paper contributes significantly to our understanding of deep neural networks in optical flow estimation by highlighting their adversarial vulnerabilities. It sets a precedent for ongoing research aimed at fortifying deep learning models against adversarial threats, thereby paving the path towards more resilient and safer AI systems in critical applications.
