2-bit Conformer quantization for automatic speech recognition (2305.16619v1)

Published 26 May 2023 in eess.AS

Abstract: Large speech models are rapidly gaining traction in the research community. As a result, model compression has become an important topic, so that these models can fit in memory and be served at reduced cost. Practical approaches for compressing automatic speech recognition (ASR) models use int8 or int4 weight quantization. In this study, we propose to develop 2-bit ASR models. We explore the impact of symmetric and asymmetric quantization, combined with sub-channel quantization and clipping, on both the LibriSpeech dataset and large-scale training data. We obtain a lossless 2-bit Conformer model with a 32% model size reduction compared to the state-of-the-art 4-bit Conformer model on LibriSpeech. With the large-scale training data, we obtain a 2-bit Conformer model with over 40% model size reduction against the 4-bit version, at the cost of 17% relative word error rate degradation.

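The abstract contrasts symmetric and asymmetric quantization with sub-channel grouping and clipping. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of those ingredients for a 2-bit weight quantizer, with the function name `quantize_2bit` and its parameters chosen here for illustration only.

```python
# Minimal sketch of 2-bit fake quantization with symmetric/asymmetric modes,
# sub-channel scales, and clipping. Illustrative only, not the paper's code.
import numpy as np

def quantize_2bit(w, symmetric=True, clip_ratio=1.0, sub_channels=1):
    """Fake-quantize `w` to 2 bits along its last axis.

    symmetric:   integer levels {-2, -1, 0, 1}, zero point fixed at 0.
    asymmetric:  integer levels {0, 1, 2, 3} with a learned-free zero point
                 estimated from the per-group min.
    clip_ratio:  fraction of the per-group range kept before rounding (clipping).
    sub_channels: each output channel is split into this many groups,
                 each with its own scale (sub-channel quantization).
    """
    w = np.asarray(w, dtype=np.float32)
    shape = list(w.shape)
    # Split the last axis into sub-channel groups.
    w_g = w.reshape(shape[:-1] + [sub_channels, shape[-1] // sub_channels])

    if symmetric:
        max_abs = np.max(np.abs(w_g), axis=-1, keepdims=True) * clip_ratio
        scale = np.maximum(max_abs, 1e-8) / 2.0          # map range to [-2, 1]
        q = np.clip(np.round(w_g / scale), -2, 1)
        deq = q * scale
    else:
        lo = np.min(w_g, axis=-1, keepdims=True) * clip_ratio
        hi = np.max(w_g, axis=-1, keepdims=True) * clip_ratio
        scale = np.maximum(hi - lo, 1e-8) / 3.0          # 4 levels: 0..3
        zero_point = np.round(-lo / scale)
        q = np.clip(np.round(w_g / scale) + zero_point, 0, 3)
        deq = (q - zero_point) * scale

    return deq.reshape(w.shape)

# Usage: quantize a Conformer-sized projection weight and inspect the error.
w = np.random.randn(256, 1024).astype(np.float32)
w_q = quantize_2bit(w, symmetric=False, clip_ratio=0.9, sub_channels=4)
print("mean abs error:", np.mean(np.abs(w - w_q)))
```

In this sketch, increasing `sub_channels` gives each group its own scale and so reduces quantization error at the cost of extra scale storage, which is the trade-off sub-channel quantization targets at 2 bits.
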
Authors (7)
  1. Oleg Rybakov (15 papers)
  2. Phoenix Meadowlark (3 papers)
  3. Shaojin Ding (12 papers)
  4. David Qiu (12 papers)
  5. Jian Li (667 papers)
  6. David Rim (4 papers)
  7. Yanzhang He (41 papers)
Citations (9)
