Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving (1907.06826v2)

Published 16 Jul 2019 in cs.CR, cs.CV, eess.SP, and stat.ML

Abstract: In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have been made to study the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored. We consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV. We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process. Thus, we then explore the possibility of strategically controlling the spoofed attack to fool the machine learning model. We formulate this task as an optimization problem and design modeling methods for the input perturbation function and the objective function. We also identify the inherent limitations of directly solving the problem using optimization and design an algorithm that combines optimization and global sampling, which improves the attack success rates to around 75%. As a case study to understand the attack impact at the AV driving decision level, we construct and evaluate two attack scenarios that may damage road safety and mobility. We also discuss defense directions at the AV system, sensor, and machine learning model levels.

Authors (9)
  1. Yulong Cao (26 papers)
  2. Chaowei Xiao (110 papers)
  3. Benjamin Cyr (4 papers)
  4. Yimeng Zhou (2 papers)
  5. Won Park (6 papers)
  6. Sara Rampazzi (13 papers)
  7. Qi Alfred Chen (37 papers)
  8. Kevin Fu (13 papers)
  9. Z. Morley Mao (34 papers)
Citations (487)

Summary

Overview of "Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving"

The paper "Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving" explores the vulnerabilities of LiDAR systems used in autonomous vehicles through spoofing attacks. This paper represents the first comprehensive examination of the security of LiDAR-based perception systems in autonomous driving environments, contrasting prior research that has predominantly focused on camera-based perception.

Research Objective

The primary objective of the research is to investigate whether LiDAR spoofing can lead to semantically impactful consequences, such as making autonomous vehicles (AVs) perceive fake obstacles. Such fabricated obstacles could affect an AV's driving decisions, posing significant risks to road safety. Specifically, the paper aims to spoof obstacles close to the front of an AV, prompting immediate adverse reactions such as abrupt stopping.

Methodological Approach

The researchers employed a systematic approach to assess LiDAR vulnerability:

  1. Attack Reproduction and Blind Spoofing Experiments:
    • They reproduced existing LiDAR spoofing techniques to evaluate whether injected points could produce detectable obstacles in the LiDAR-based perception pipeline of Baidu Apollo, a representative AV system.
    • Initial experiments showed that blindly applying spoofing could not produce the desired semantic effects, owing to the machine learning-based object detection process.
  2. Adversarial Example Generation:
    • To exploit vulnerabilities in the machine learning-based object detection models, the paper formulated the attack as an optimization problem.
    • The researchers modeled adversarial input perturbations using global spatial transformations of the spoofed point cloud.
    • Because directly solving the problem with optimization has inherent limitations, they designed an algorithm that combines optimization with global sampling, raising attack success rates to approximately 75% (a minimal sketch of this strategy appears after this list).
  3. Impact Scenarios:
    • Two specific scenarios were crafted to demonstrate the real-world implications of successful spoofing:
      • An emergency brake scenario where the vehicle halts abruptly, posing risks of passenger injury or collision.
      • An AV freezing scenario where a vehicle remains stationary at a green light, disrupting traffic flow.
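
To make the optimization-plus-sampling strategy concrete, here is a minimal sketch in Python. It is not the paper's implementation: the `detector` callable, the coordinate ranges, and the random-search refinement step are all illustrative assumptions, whereas the paper optimizes a global spatial transformation of the spoofed point cloud with gradient-based methods against Apollo's actual detection model.

```python
import numpy as np

def global_sampling_attack(detector, n_points=60, n_samples=200,
                           n_refine_steps=50, step_size=0.05, seed=0):
    """Sketch of combining global sampling with local refinement to place
    spoofed LiDAR points. `detector` is a hypothetical callable mapping an
    (n_points, 3) array of spoofed points to a detection confidence in
    [0, 1]; higher means the fake obstacle is more likely to be detected.
    """
    rng = np.random.default_rng(seed)
    best_points, best_conf = None, -np.inf
    for _ in range(n_samples):
        # Global sampling: random initial placement of the spoofed points
        # inside an (illustrative) spoofable region in front of the AV,
        # in meters as (x: forward, y: lateral, z: height).
        pts = rng.uniform(low=[4.0, -2.0, -1.0],
                          high=[8.0, 2.0, 1.0],
                          size=(n_points, 3))
        conf = detector(pts)
        # Local refinement: simple random search around the sample. The
        # paper instead uses gradient-based optimization over a global
        # spatial transformation of the point cloud.
        for _ in range(n_refine_steps):
            candidate = pts + rng.normal(scale=step_size, size=pts.shape)
            cand_conf = detector(candidate)
            if cand_conf > conf:
                pts, conf = candidate, cand_conf
        if conf > best_conf:
            best_points, best_conf = pts, conf
    return best_points, best_conf
```

With a real detection pipeline standing in for `detector`, the outer loop explores the spoofable region globally while the inner loop refines each sample locally, which is the intuition behind combining sampling with optimization.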

Key Findings and Results

  • Success Rates:
    • The sampling-based optimization method improved the success rate of generating spoofed obstacles from 18.9% to 43.3%.
    • A maximum success rate of around 75% was achieved with the strategic placement of 60 spoofed points.
  • Robustness:
    • The adversarial examples exhibited high robustness against variations in point cloud data and spoofed points, indicating potential real-world applicability.

Implications and Future Directions

The implications of this paper are extensive: it exposes critical vulnerabilities in LiDAR-based perception systems that could have severe safety and operational repercussions for autonomous vehicles. The research underscores the need for robust defense mechanisms at multiple levels:

  • AV System-Level Defenses:
    • Filtering LiDAR input data more carefully, for example by discarding returns attributable to ground reflections (a filtering sketch follows this list).
  • Sensor- and Model-Level Defenses:
    • Improving LiDAR hardware to reduce its susceptibility to spoofing, and applying adversarial training to fortify machine learning models against such attacks.
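
As an illustration of the system-level filtering direction, the sketch below drops LiDAR returns that lie near an assumed ground plane. The function name, the (N, 4) point layout, and the plane parameters are hypothetical; a deployed system would estimate the ground plane per frame rather than assume a fixed height.

```python
import numpy as np

def filter_ground_reflections(points, ground_z=-1.6, z_tolerance=0.15):
    """Drop LiDAR returns close to an assumed flat ground plane.

    points: (N, 4) array of [x, y, z, intensity] returns.
    ground_z and z_tolerance are illustrative values in meters; a real
    system would estimate the ground plane (e.g., via RANSAC) per frame.
    """
    keep = np.abs(points[:, 2] - ground_z) > z_tolerance
    return points[keep]
```

A filter of this kind shrinks the region an attacker can exploit for spoofed points, at the cost of also discarding some legitimate low-height returns such as curbs or small debris.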

Future research could pursue on-road experimentation to validate these findings and foster advancements in defense strategies, ensuring the security and safety of autonomous driving technologies.

In conclusion, while the paper presents compelling insights into the vulnerabilities associated with LiDAR systems, further exploration and development of countermeasures remain crucial in safeguarding the deployment of autonomous vehicle technologies.