
Targeted Adversarial Examples for Black Box Audio Systems (1805.07820v2)

Published 20 May 2018 in cs.LG, cs.CR, cs.SD, eess.AS, and stat.ML

Abstract: The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition (ASR) systems. Many have demonstrated that small adversarial perturbations can fool deep neural networks into incorrectly predicting a specified target with high confidence. Current work on fooling ASR systems has focused on white-box attacks, in which the model architecture and parameters are known. In this paper, we adopt a black-box approach to adversarial generation, combining the approaches of both genetic algorithms and gradient estimation to solve the task. We achieve an 89.25% targeted attack similarity after 3000 generations while maintaining 94.6% audio file similarity.

Authors (4)
  1. Rohan Taori (14 papers)
  2. Amog Kamsetty (5 papers)
  3. Brenton Chu (1 paper)
  4. Nikita Vemuri (3 papers)
Citations (179)

Summary

A Review of Targeted Adversarial Examples for Black Box Audio Systems

The paper "Targeted Adversarial Examples for Black Box Audio Systems" addresses the challenge of creating adversarial examples for automatic speech recognition (ASR) systems under the constraints of a black box setting. It builds upon the vulnerabilities of deep neural networks to adversarial perturbations by focusing on audio transcription models, notably adopting a black box approach to adversarial example generation. The authors employ a combination of genetic algorithms and gradient estimation techniques, achieving a 35% targeted attack success rate with a noteworthy audio file similarity of 94.6%.

Key Findings and Methodology

ASR systems, such as those used in personal assistants, rely heavily on deep recurrent networks, which have shown significant improvements due to advancements in neural network architectures. However, these systems remain susceptible to adversarial attacks. While most existing research is concentrated on white-box attacks, where model parameters and architecture are fully known, real-world applications often follow a more restricted, black-box scenario. In such instances, attackers may only access the system's outputs rather than its internal design or weights.

The paper sets out to create adversarial perturbations without direct model knowledge through a two-phase approach. In the first phase, a genetic algorithm explores the space of candidate audio samples, mutating them and selecting the fittest according to a connectionist temporal classification (CTC) loss computed on the model's output. This phase incorporates a novel momentum mutation update that adapts the mutation probability across generations, allowing useful mutations to accumulate and helping the search escape local optima. Once a candidate's decoding approaches the target phrase, the method switches to gradient estimation, refining the perturbation using gradients estimated from model queries; because audio waveforms contain many samples at typical sampling rates, these estimates are computationally expensive and are therefore reserved for the final fine-tuning stage.
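As a concrete illustration of this two-phase strategy, the sketch below assumes a query-only interface to the target ASR system (a `query_ctc_loss` oracle and a `decode` oracle, both hypothetical wrappers an attacker would implement). The population size, noise scale, closeness test, and the specific momentum-style mutation update are placeholder choices for illustration, not the authors' exact settings.

```python
import numpy as np

def query_ctc_loss(audio, target):
    """Black-box oracle: CTC loss of the ASR output for `audio` against
    `target` (lower is better). The attacker only observes this scalar."""
    raise NotImplementedError("wrap the target ASR system here")

def decode(audio):
    """Black-box oracle: the ASR system's transcription of `audio`."""
    raise NotImplementedError("wrap the target ASR system here")

def mutate(audio, p, noise_std=40.0):
    # Perturb each sample with probability p by adding Gaussian noise,
    # clipping to the 16-bit PCM range.
    mask = np.random.rand(audio.size) < p
    return np.clip(audio + mask * np.random.randn(audio.size) * noise_std,
                   -2**15, 2**15 - 1)

def estimate_gradient(audio, target, n_coords=100, delta=1.0):
    # Finite-difference gradient estimate over a random subset of sample
    # positions; estimating every coordinate would require one query per
    # sample, which is prohibitive at typical audio sampling rates.
    grad = np.zeros(audio.size)
    idx = np.random.choice(audio.size, n_coords, replace=False)
    base = query_ctc_loss(audio, target)
    for i in idx:
        perturbed = np.array(audio, dtype=float)
        perturbed[i] += delta
        grad[i] = (query_ctc_loss(perturbed, target) - base) / delta
    return grad

def near_target(decoded, target, tol=2):
    # Crude closeness test: same length and at most `tol` character mismatches.
    return len(decoded) == len(target) and \
        sum(a != b for a, b in zip(decoded, target)) <= tol

def black_box_attack(original, target, pop_size=100, generations=3000,
                     p=0.005, lr=10.0, refine_steps=100):
    # Phase 1: genetic search over perturbed copies of the original audio.
    population = [mutate(original, p) for _ in range(pop_size)]
    prev_best, best = None, original
    for _ in range(generations):
        scores = [query_ctc_loss(x, target) for x in population]
        order = np.argsort(scores)              # lower CTC loss = fitter
        best, best_score = population[order[0]], scores[order[0]]

        # Momentum-style mutation update (illustrative): raise the mutation
        # probability when progress stalls, decay it when improving.
        if prev_best is not None and abs(best_score - prev_best) < 1e-3:
            p = min(0.05, p * 1.1)
        else:
            p = max(0.001, p * 0.9)
        prev_best = best_score

        # Phase 2: once the decoding is close to the target phrase, switch
        # to gradient-estimation fine-tuning for the final adjustments.
        if near_target(decode(best), target):
            for _ in range(refine_steps):
                best = best - lr * estimate_gradient(best, target)
            break

        # Elitism plus crossover and mutation to form the next generation.
        elite = [population[i] for i in order[: pop_size // 10]]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = np.random.randint(len(elite), size=2)
            mask = np.random.rand(original.size) < 0.5
            children.append(mutate(np.where(mask, elite[a], elite[b]), p))
        population = elite + children
    return best
```

In a sketch like this, query count dominates the cost: each generation issues one loss query per population member, and each finite-difference gradient estimate adds one query per sampled coordinate, which is why the gradient phase is deferred until the candidate is already close to the target.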

Analytical Results

The authors use the CommonVoice test set, aiming to convert benign audio samples into targeted adversarial inputs that ASR systems transcribe as attacker-chosen phrases. With a 35% targeted success rate after 3000 generations, the method demonstrates the feasibility of black-box attacks on speech-to-text systems, a setting inherently harder than comparable image-based attacks. It achieves 89.25% similarity between the adversarial decoding and the target text while retaining audio fidelity, demonstrating practical viability in fooling ASR systems. Moreover, with thousands of possible output transcriptions, the output space is far larger than an image classifier's limited set of class labels, underscoring the difficulty of targeted audio attacks.
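The 89.25% figure is a similarity score between the decoded transcription and the target phrase rather than an exact-match rate. One plausible way to compute such a character-level similarity, shown below as an illustrative assumption (the paper's exact metric may differ), is an edit-distance ratio.

```python
def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def text_similarity(decoded: str, target: str) -> float:
    """Character-level similarity in [0, 1]; 1.0 means an exact match."""
    longest = max(len(decoded), len(target))
    if longest == 0:
        return 1.0
    return 1.0 - levenshtein(decoded, target) / longest
```

Under this definition, `text_similarity("hello world", "hello world")` returns 1.0, while a single wrong character in an eleven-character target yields roughly 0.91.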

Implications and Future Directions

The findings illustrate the potential for creating subtle adversarial examples in audio domains, where human listeners cannot easily detect perturbations yet ASR systems misinterpret them. Practically, such attacks could have implications for security in voice-activated systems, possibly compelling further research into robust defenses against adversarial audio inputs.

For theoretical advancements in AI, these results affirm the relevance of genetic algorithms and gradient estimation in adversarial generation, particularly in complex and opaque model environments. The successful integration of momentum mutation and constrained perturbation strategies could inform broader applications across other black-box machine learning challenges.

Future research might focus on increasing the robustness and efficiency of adversarial attacks while simultaneously exploring defenses. Enhanced techniques may lead to improved attack success rates and allow for more extensive practical testing across larger datasets and real-world systems. Additionally, cross-domain insights—leveraging approaches from audio adversarial attack strategies to other high-complexity models—could expand the versatility and reach of adversarial research in the AI field.