
Robust Audio Adversarial Example for a Physical Attack (1810.11793v4)

Published 28 Oct 2018 in cs.LG, cs.CR, cs.SD, eess.AS, and stat.ML

Abstract: We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world. Previous work assumes that generated adversarial examples are fed directly to the recognition model, and such examples cannot mount a physical attack because of reverberation and noise from the playback environment. In contrast, our method obtains robust adversarial examples by simulating the transformations caused by playback and recording in the physical world and incorporating these transformations into the generation process. An evaluation and a listening experiment demonstrated that our adversarial examples can attack without being noticed by humans. This result suggests that audio adversarial examples generated by the proposed method may become a real threat.
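The core idea in the abstract is to make the adversarial example robust by optimizing it against *simulated* physical transformations (reverberation via impulse-response convolution, plus recording noise) rather than against the clean waveform alone. Below is a minimal, hypothetical sketch of that transformation-simulation step; the impulse responses, noise level, and function names are illustrative assumptions, not the paper's actual implementation, and the speech-recognition model and optimization loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_playback(x, impulse_responses, noise_std=0.01):
    """Hypothetical sketch: simulate playback/recording by convolving the
    waveform with a randomly chosen room impulse response and adding
    Gaussian background noise."""
    ir = impulse_responses[rng.integers(len(impulse_responses))]
    reverberated = np.convolve(x, ir)[: len(x)]  # truncate to input length
    noise = rng.normal(0.0, noise_std, size=len(x))
    return reverberated + noise

# Toy data: a 1-second 440 Hz tone standing in for speech audio,
# and two synthetic impulse responses (not measured from real rooms).
x = np.sin(2 * np.pi * 440 * np.linspace(0.0, 1.0, 16000))
irs = [np.array([1.0, 0.3, 0.1]), np.array([1.0, 0.0, 0.5, 0.2])]

# In the paper's setting, a perturbation delta would be optimized so that
# the *transformed* signal fools the recognizer, i.e. roughly
#   minimize  E_t [ loss(model(t(x + delta)), target_phrase) ]
# where t is drawn from the simulated transformations. Here we only show
# sampling several transformed copies of the signal.
transformed = [simulate_playback(x, irs) for _ in range(4)]
```

Averaging the attack loss over many such transformed copies is what pushes the perturbation to survive real-world reverberation and noise.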

Authors (2)
  1. Hiromu Yakura (19 papers)
  2. Jun Sakuma (46 papers)
Citations (185)
