
There is more than one kind of robustness: Fooling Whisper with adversarial examples (2210.17316v2)

Published 26 Oct 2022 in eess.AS, cs.AI, cs.CL, cs.LG, and cs.SD

Abstract: Whisper is a recent Automatic Speech Recognition (ASR) model displaying impressive robustness to both out-of-distribution inputs and random noise. In this work, we show that this robustness does not carry over to adversarial noise. We show that we can degrade Whisper performance dramatically, or even transcribe a target sentence of our choice, by generating very small input perturbations with a Signal-to-Noise Ratio of 35-45 dB. We also show that by fooling the Whisper language detector we can very easily degrade the performance of multilingual models. These vulnerabilities of a widely used open-source model have practical security implications and emphasize the need for adversarially robust ASR.
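The 35-45 dB figure quantifies how quiet the adversarial perturbation is relative to the clean audio. As a minimal NumPy sketch (not the authors' attack code), the helpers below compute the SNR of a perturbation and rescale a candidate perturbation to hit a target SNR budget; the audio and perturbation here are random stand-ins, and the function names are illustrative:

```python
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """Signal-to-noise ratio in dB between clean audio and a perturbation."""
    return 10.0 * np.log10(np.sum(signal**2) / np.sum(noise**2))

def scale_to_snr(signal: np.ndarray, noise: np.ndarray, target_db: float) -> np.ndarray:
    """Rescale `noise` so that snr_db(signal, scaled_noise) equals target_db.

    Scaling the noise by k shifts the SNR by -20*log10(k), so we solve for k.
    """
    current = snr_db(signal, noise)
    k = 10.0 ** ((current - target_db) / 20.0)
    return noise * k

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)        # stand-in for 1 s of 16 kHz audio
delta = rng.standard_normal(16000) * 0.1  # candidate adversarial perturbation
delta = scale_to_snr(audio, delta, 40.0)  # constrain to a 40 dB SNR budget
```

In an actual attack loop, a step like `scale_to_snr` (or an equivalent norm projection) would be applied after each gradient update on the perturbation to keep it within the inaudibility budget the abstract describes.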

Authors (2)
  1. Raphael Olivier (10 papers)
  2. Bhiksha Raj (180 papers)
Citations (10)