Audio Adversarial Examples: Attacks Using Vocal Masks (2102.02417v2)
Published 4 Feb 2021 in cs.SD, cs.AI, and eess.AS
Abstract: We construct audio adversarial examples against automatic Speech-to-Text (STT) systems. Given any audio waveform, we produce another by overlaying an audio vocal mask generated from the original audio. We apply our audio adversarial attack to five state-of-the-art STT systems: DeepSpeech, Julius, Kaldi, wav2letter@anywhere, and CMUSphinx. In addition, we engaged human annotators to transcribe the adversarial audio. Our experiments show that these adversarial examples fool state-of-the-art STT systems, yet humans are able to consistently pick out the speech. The feasibility of this attack opens a new domain for studying machine and human perception of speech.
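The abstract only summarizes the attack, so the following is a minimal, hypothetical sketch of the general idea: restrict a perturbation to a "mask" derived from the original audio's own spectral content, then overlay it on the waveform. The mask construction here (thresholding the strongest STFT bins) is an illustrative stand-in and is not the paper's actual vocal-mask method; all function names and parameters below are assumptions.

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    """Short-time Fourier transform via a simple framed rFFT."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(S, n_fft=512, hop=128):
    """Inverse STFT by windowed overlap-add of inverse rFFT frames."""
    win = np.hanning(n_fft)
    frames = np.fft.irfft(S, n=n_fft, axis=1)
    out = np.zeros((len(frames) - 1) * hop + n_fft)
    for i, f in enumerate(frames):
        out[i * hop:i * hop + n_fft] += f * win
    return out

def masked_overlay(x, strength=0.1, seed=0):
    """Overlay noise confined to the strongest spectral bins of x.

    This is an illustrative proxy for a 'vocal mask', NOT the paper's
    published construction.
    """
    S = stft(x)
    mag = np.abs(S)
    mask = mag > np.percentile(mag, 90)  # keep only the top 10% of bins
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(S.shape) + 1j * rng.standard_normal(S.shape)
    perturb = istft(noise * mask)
    # Pad/trim the overlap-added signal back to the input length.
    perturb = np.pad(perturb, (0, max(0, len(x) - len(perturb))))[:len(x)]
    # Normalize so `strength` bounds the perturbation's peak amplitude.
    return x + strength * perturb / (np.abs(perturb).max() + 1e-12)

# Example: perturb one second of a 440 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440 * t)
adversarial = masked_overlay(clean, strength=0.05)
```

Because the perturbation lives only where the original signal already has energy, it tends to be less salient to listeners than broadband noise, which is consistent with the abstract's claim that humans can still pick out the speech.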
- Kai Yuan Tay (1 paper)
- Lynnette Ng (3 papers)
- Wei Han Chua (1 paper)
- Lucerne Loke (1 paper)
- Danqi Ye (1 paper)
- Melissa Chua (1 paper)