Introduction
In the field of autonomous driving, the reliability of decision-making systems is paramount for safety. One key aspect of these systems is their ability to accurately recognize and respond to objects such as traffic signs. However, the resilience of these systems to adversarial attacks, where the input data is manipulated to deceive the system, remains a concern. This paper presents a novel approach to challenge the robustness of autonomous driving systems by introducing dynamic adversarial patches aimed at the object detection models that these systems rely on.
Attack Framework
The paper introduces a method for generating dynamic adversarial patches that mislead the object detection algorithms of autonomous vehicles. The patches are displayed on a screen mounted on another vehicle and adapt to environmental changes and to the relative positions of the vehicles involved. A Screen Image Transformation Network (SIT-Net) is designed to account for environmental effects on the displayed images, helping to bridge the gap between simulated and real-world scenarios. In addition, a positional loss term is incorporated into the adversarial training process to improve the attack's chances of success. Such manipulations can affect critical driving decisions, such as intersection crossings and lane changes.
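The role a SIT-Net-style component plays can be illustrated with a minimal sketch. The real SIT-Net is a learned neural network; the stand-in below merely fits a per-channel affine color map on paired digital/captured images to mimic the screen-display-then-camera-capture pipeline. All function names, shapes, and the affine assumption are illustrative, not the authors' implementation:

```python
import numpy as np

def fit_channel_affine(digital, captured):
    """Fit captured ~= a * digital + b per color channel (shapes: H x W x 3).

    A crude proxy for learning how the screen and camera distort the
    displayed patch colors.
    """
    params = []
    for c in range(3):
        x = digital[..., c].ravel()
        y = captured[..., c].ravel()
        a, b = np.polyfit(x, y, 1)  # least-squares slope and intercept
        params.append((a, b))
    return params

def apply_affine(img, params):
    """Predict how a digital patch would appear once displayed and captured."""
    out = np.empty_like(img)
    for c, (a, b) in enumerate(params):
        out[..., c] = a * img[..., c] + b
    return out
```

Optimizing the patch through such a transform means the attack is tuned against what the target camera actually sees, rather than against the pristine digital image.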
Methodology
The paper employs gradient descent optimization to find patches that cause objects to be misclassified, for example a non-restrictive traffic sign being identified as a restrictive one. The specialized SIT-Net model adapts the patch to environmental conditions, and the patches themselves are optimized for different distances from the observer vehicle. By incorporating the positions of the interacting cars and environmental factors, the attack strategy becomes more dynamic and better reflects real-world driving scenarios.
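The optimization loop can be sketched in miniature. The toy below replaces the paper's object detector with a linear scorer and runs bounded gradient ascent on an additive patch; every name, shape, learning rate, and bound here is an illustrative assumption, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 64                           # flattened patch region (assumed size)
w = rng.normal(size=n_pixels)           # toy stand-in for detector gradients
sign = rng.uniform(0.0, 1.0, n_pixels)  # clean traffic-sign pixels in [0, 1]

def restrictive_score(x):
    """Higher score => the toy 'detector' leans toward the restrictive class."""
    return float(w @ x)

patch = np.zeros(n_pixels)              # additive perturbation being optimized
lr, bound = 0.05, 0.3
for _ in range(200):
    # The gradient of w @ (sign + patch) with respect to the patch is w,
    # so each step pushes the patched image toward the restrictive class.
    patch += lr * w
    patch = np.clip(patch, -bound, bound)  # keep the perturbation bounded

clean = restrictive_score(sign)
attacked = restrictive_score(np.clip(sign + patch, 0.0, 1.0))
```

In the paper's setting the scorer would be a full object detection model with the positional loss term added, and the gradients would be backpropagated through SIT-Net so the patch survives display and capture.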
Experimental Insights
The paper reports the first successful implementation of this kind of dynamic adversarial attack against real-world autonomous driving systems. Experiments demonstrate higher success rates for the dynamic patches in many cases, especially when the attacking vehicle is close to the targeted vehicle, which is particularly relevant in practical driving situations. These enhancements to the adversarial training framework are expected to inform future work on building more resilient autonomous driving systems.
This research highlights the potential risks and vulnerabilities of deep neural network-based object detectors in autonomous vehicles, underscoring the need to develop robust autonomous driving systems that can withstand adversarial conditions and ensure continuously safe operation.