The IBM 2016 English Conversational Telephone Speech Recognition System (1604.08242v2)

Published 27 Apr 2016 in cs.CL

Abstract: We describe a collection of acoustic and language modeling techniques that lowered the word error rate of our English conversational telephone LVCSR system to a record 6.6% on the Switchboard subset of the Hub5 2000 evaluation test set. On the acoustic side, we use a score fusion of three strong models: recurrent nets with maxout activations, very deep convolutional nets with 3x3 kernels, and bidirectional long short-term memory nets which operate on FMLLR and i-vector features. On the language modeling side, we use an updated model "M" and hierarchical neural network LMs.
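The abstract states that the three acoustic models are combined by score fusion, but does not spell out the recipe here. The snippet below is a minimal sketch assuming a frame-level log-linear (weighted-sum) combination of each model's per-frame log posteriors; the function name fuse_scores and the weight values are illustrative, not taken from the paper.

import numpy as np

# Hypothetical frame-level score fusion, sketched with NumPy. Each acoustic
# model is assumed to emit per-frame log posteriors over the same set of
# context-dependent HMM states; the fused score is their weighted sum.
def fuse_scores(log_posteriors, weights):
    # log_posteriors: list of (num_frames, num_states) arrays, one per model
    # weights: one fusion weight per model (assumed tuned on held-out data)
    assert len(log_posteriors) == len(weights)
    fused = np.zeros_like(log_posteriors[0])
    for scores, w in zip(log_posteriors, weights):
        fused += w * scores
    return fused

# Toy stand-ins for the three models (maxout RNN, very deep CNN, BLSTM);
# the 0.4/0.3/0.3 weights are purely illustrative.
rng = np.random.default_rng(0)
num_frames, num_states = 200, 64
scores = [np.log(rng.dirichlet(np.ones(num_states), size=num_frames))
          for _ in range(3)]
fused = fuse_scores(scores, weights=[0.4, 0.3, 0.3])
print(fused.shape)  # (200, 64)

In a decoder, fused scores like these would stand in for a single model's acoustic scores, with the weights tuned on a held-out set.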

Authors (4)
  1. George Saon (39 papers)
  2. Tom Sercu (17 papers)
  3. Steven Rennie (6 papers)
  4. Hong-Kwang J. Kuo (11 papers)
Citations (108)
