
Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks (2106.09249v1)

Published 17 Jun 2021 in cs.CR, cs.CV, and cs.LG

Abstract: In Autonomous Driving (AD) systems, perception is both security and safety critical. Despite various prior studies on its security issues, all of them only consider attacks on camera- or LiDAR-based AD perception alone. However, production AD systems today predominantly adopt a Multi-Sensor Fusion (MSF) based design, which in principle can be more robust against these attacks under the assumption that not all fusion sources are (or can be) attacked at the same time. In this paper, we present the first study of security issues of MSF-based perception in AD systems. We directly challenge the basic MSF design assumption above by exploring the possibility of attacking all fusion sources simultaneously. This allows us for the first time to understand how much security guarantee MSF can fundamentally provide as a general defense strategy for AD perception. We formulate the attack as an optimization problem to generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it. We propose a novel attack pipeline that addresses two main design challenges: (1) non-differentiable target camera and LiDAR sensing systems, and (2) non-differentiable cell-level aggregated features popularly used in LiDAR-based AD perception. We evaluate our attack on MSF included in representative open-source industry-grade AD systems in real-world driving scenarios. Our results show that the attack achieves over 90% success rate across different object types and MSF. Our attack is also found stealthy, robust to victim positions, transferable across MSF algorithms, and physical-world realizable after being 3D-printed and captured by LiDAR and camera devices. To concretely assess the end-to-end safety impact, we further perform simulation evaluation and show that it can cause a 100% vehicle collision rate for an industry-grade AD system.

Authors (9)
  1. Yulong Cao* (1 paper)
  2. Ningfei Wang* (1 paper)
  3. Chaowei Xiao* (1 paper)
  4. Dawei Yang* (1 paper)
  5. Jin Fang (23 papers)
  6. Ruigang Yang (68 papers)
  7. Qi Alfred Chen (37 papers)
  8. Mingyan Liu (70 papers)
  9. Bo Li (1107 papers)
Citations (188)

Summary

  • The paper introduces adversarial 3D objects that compromise both camera and LiDAR perception in multi-sensor fusion systems.
  • It employs an optimization-based attack method using differentiable 3D rendering and gradient techniques to simulate physical-world conditions.
  • Empirical evaluations demonstrate over 90% success rates in attack scenarios, highlighting critical challenges in securing autonomous vehicle perception.

Overview of Multi-Sensor Fusion Based Perception Security in Autonomous Driving

The paper "Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks" investigates the vulnerabilities of Multi-Sensor Fusion (MSF) based perception in autonomous vehicles (AVs) under adversarial physical-world attacks. It is the first study of security threats targeting MSF-based perception in AV systems. The paper directly challenges the core assumption underlying MSF's robustness, namely that not all fusion sources, such as camera and LiDAR, can be attacked at the same time, by demonstrating that both can be compromised simultaneously.

Key Contributions and Findings

  1. Adversarial 3D Object Attack Vector: The research introduces adversarial 3D objects as a novel attack vector capable of compromising MSF-based perception by exploiting physical shape manipulation. These objects are designed to simultaneously impair both camera and LiDAR perception capabilities, thus invalidating the core MSF reliability assumption.
  2. Optimization-based Attack Methodology: Attack generation is framed as an optimization problem that produces adversarial objects through gradient-based methods. The researchers use differentiable 3D rendering to simulate camera and LiDAR inputs, allowing effective adversarial shapes to be generated without extensive physical trials.
  3. Design Challenges and Solutions:
    • A major technical hurdle is propagating gradients through non-differentiable stages of the pipeline: the camera and LiDAR sensing processes, and the cell-level aggregated features widely used in LiDAR-based AD perception. The paper addresses the latter with a novel soft point-inclusion calculation that makes cell-level feature aggregation differentiable.
    • By leveraging Expectation over Transformation (EoT), the attacks are made robust to environmental variations, so the adversarial objects remain effective under different conditions such as changing victim positions.
  4. Empirical Evaluation: The attacks are evaluated against MSF algorithms from representative open-source, industry-grade AD systems, achieving success rates above 90% across different scenarios and object types. The adversarial objects are also found to be stealthy, robust to victim positions, transferable across MSF algorithms, and physically realizable after 3D printing.
  5. End-to-end Implications: In simulation, the attack causes a 100% vehicle collision rate for an industry-grade AD system, underscoring the safety risk posed by such adversarial techniques and the need to re-evaluate the defensive capabilities of MSF systems.
  6. Defense Strategies: Insights into potential defensive measures are discussed, including input transformation techniques and adversarial training. However, the paper suggests that these approaches only partially mitigate the threat, emphasizing the necessity for more advanced defensive tactics tailored to MSF-based perception vulnerabilities.
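To make the two gradient-related ideas above concrete, here is a minimal sketch in Python/numpy. The paper's actual formulation is not reproduced here; the function names, the sigmoid relaxation with temperature `tau`, and the toy EoT averaging loop are all illustrative assumptions. The sketch shows why a hard point-in-cell test blocks gradients and how a soft relaxation restores them, plus the basic shape of an EoT objective.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_point_count(points, cell_min, cell_max, tau=0.05):
    # A hard point-in-cell test (cell_min <= p <= cell_max) is a step
    # function with zero gradient almost everywhere. Replacing each
    # comparison with a sigmoid (temperature tau, an assumed knob) gives
    # a soft membership in (0, 1) that varies smoothly with the point
    # coordinates, so gradients can flow back to the object's shape.
    lower = sigmoid((points - cell_min) / tau)   # ~1 when p > cell_min
    upper = sigmoid((cell_max - points) / tau)   # ~1 when p < cell_max
    membership = np.prod(lower * upper, axis=1)  # soft AND over x, y, z
    return membership.sum()                      # differentiable "count"

def eot_loss(shape_params, loss_fn, transforms):
    # Expectation over Transformation: average the attack loss over a
    # set of sampled physical transformations (e.g., viewpoint or
    # position shifts) so the optimized shape stays effective under
    # environmental variation rather than overfitting to one setting.
    return np.mean([loss_fn(t(shape_params)) for t in transforms])
```

A point well inside a unit cell contributes a soft count close to 1, while a distant point contributes nearly 0, matching the hard count in the limit `tau → 0` while remaining differentiable for finite `tau`.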

Implications and Future Work

The implications of this paper are profound for the field of autonomous driving, particularly pertaining to the security assurances of MSF systems. By exposing the susceptibility of these systems to adversarial attacks, the research invites reconsideration of MSF's reliability as a singular defense mechanism.

This work advocates enhancement in sensor fusion defenses, potentially by expanding multisensor inputs beyond traditional camera and LiDAR configurations, or employing novel robust algorithms that account for adversarial conditions in real time. Furthermore, the development of certified robustness frameworks that accommodate 3D physical attacks remains an open research frontier.

In summary, this paper highlights a critical dimension at the intersection of autonomous vehicles and cybersecurity, urging a rethinking of defense paradigms to ensure the safety of AV deployments.