Wake Word Detection with Alignment-Free Lattice-Free MMI (2005.08347v3)

Published 17 May 2020 in eess.AS, cs.CL, and cs.SD

Abstract: Always-on spoken language interfaces, e.g. personal digital assistants, rely on a wake word to start processing spoken input. We present novel methods to train a hybrid DNN/HMM wake word detection system from partially labeled training data, and to use it in on-line applications: (i) we remove the prerequisite of frame-level alignments in the LF-MMI training algorithm, permitting the use of un-transcribed training examples that are annotated only for the presence/absence of the wake word; (ii) we show that the classical keyword/filler model must be supplemented with an explicit non-speech (silence) model for good performance; (iii) we present an FST-based decoder to perform online detection. We evaluate our methods on two real data sets, showing 50%--90% reduction in false rejection rates at pre-specified false alarm rates over the best previously published figures, and re-validate them on a third (large) data set.
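The abstract's second point, that a keyword/filler model works better with an explicit silence model, can be illustrated with a toy online detector. This is a simplified sketch, not the paper's hybrid DNN/HMM system or its FST decoder: the class names (`kw`, `filler`, `sil`), scores, and threshold are invented for illustration. It accumulates a keyword-path score against a background path that takes the better of filler or silence at each frame, and fires when the keyword path leads by a margin.

```python
# Toy sketch of online wake-word detection over per-frame class
# log-probabilities. Hypothetical classes: 'kw' (wake word),
# 'filler' (other speech), 'sil' (explicit non-speech), echoing the
# paper's observation that a separate silence model helps.

def detect_wake_word(frame_logprobs, threshold=2.0):
    """frame_logprobs: list of dicts with keys 'kw', 'filler', 'sil',
    each a per-frame log-probability. Returns the frame index at
    which detection fires, or None if it never does."""
    kw_score = 0.0   # cumulative score of the keyword path
    bg_score = 0.0   # cumulative score of the best background path
    for t, lp in enumerate(frame_logprobs):
        kw_score += lp['kw']
        # background takes the better of filler speech or silence
        bg_score += max(lp['filler'], lp['sil'])
        if kw_score - bg_score > threshold:
            return t
    return None

# Example: frames where the keyword class dominates fire a detection;
# filler-dominated frames never do.
frames_pos = [{'kw': -0.1, 'filler': -1.0, 'sil': -1.2}] * 5
frames_neg = [{'kw': -1.5, 'filler': -0.2, 'sil': -0.3}] * 10
print(detect_wake_word(frames_pos))  # fires after a few frames
print(detect_wake_word(frames_neg))  # None
```

In the paper's actual system this per-frame comparison is replaced by Viterbi search over a decoding graph (an FST composing the keyword, filler, and silence HMM topologies), but the accept/reject logic follows the same score-margin idea.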

Authors (5)
  1. Yiming Wang (141 papers)
  2. Hang Lv (15 papers)
  3. Daniel Povey (45 papers)
  4. Lei Xie (337 papers)
  5. Sanjeev Khudanpur (74 papers)
Citations (16)
