Efficient And Scalable Neural Residual Waveform Coding With Collaborative Quantization (2002.05604v1)
Abstract: Scalability and efficiency are desirable in neural speech codecs, which should support a wide range of bitrates for applications on various devices. We propose a collaborative quantization (CQ) scheme to jointly learn the codebook of LPC coefficients and that of the corresponding residuals. CQ does not simply shoehorn LPC into a neural network; rather, it bridges the computational capacity of advanced neural network models with traditional, yet efficient and domain-specific, digital signal processing methods in an integrated manner. We demonstrate that CQ achieves much higher quality than its predecessor at 9 kbps with even lower model complexity. We also show that CQ can scale up to 24 kbps, where it outperforms AMR-WB and Opus. As a neural waveform codec, CQ uses fewer than 1 million parameters, significantly fewer than many other generative models.
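The core idea of "collaborative" quantization is that the LPC codebook and the residual codebook are trained jointly, so gradients from a single reconstruction loss shape both. Below is a minimal PyTorch sketch of this idea using a soft-to-hard (softmax-annealed) quantizer; all class and variable names (`SoftQuantizer`, `lpc_q`, `res_q`, the codebook sizes and dimensions) are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class SoftQuantizer(nn.Module):
    """Differentiable (soft-to-hard) vector quantizer.

    During training, inputs are replaced by a softmax-weighted mix of
    learned codebook entries; at inference the nearest entry is used.
    A sketch only; names and sizes are hypothetical.
    """
    def __init__(self, num_codes: int, dim: int, alpha: float = 100.0):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))
        self.alpha = alpha  # softmax hardness; typically annealed upward

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim); squared distances to every codebook entry
        d = torch.cdist(x, self.codebook) ** 2          # (batch, num_codes)
        if self.training:
            w = torch.softmax(-self.alpha * d, dim=-1)  # soft assignment
            return w @ self.codebook                    # differentiable mix
        idx = d.argmin(dim=-1)                          # hard assignment
        return self.codebook[idx]

# Joint training: both quantizers receive gradients from the same loss,
# so the LPC codebook and the residual codebook adapt to each other.
lpc_q = SoftQuantizer(num_codes=256, dim=16)   # e.g. 16th-order LPC frame
res_q = SoftQuantizer(num_codes=1024, dim=64)  # residual latent per frame

lpc_coeffs = torch.randn(8, 16)  # dummy per-frame LPC features
res_latent = torch.randn(8, 64)  # dummy per-frame residual latents
loss = ((lpc_q(lpc_coeffs) - lpc_coeffs) ** 2).mean() \
     + ((res_q(res_latent) - res_latent) ** 2).mean()
loss.backward()  # gradients flow into both codebooks simultaneously
```

In the actual codec the loss would be a waveform-domain reconstruction error after LPC synthesis filtering, rather than the stand-in quantization errors used here.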
- Kai Zhen (18 papers)
- Mi Suk Lee (5 papers)
- Jongmo Sung (5 papers)
- Seungkwon Beack (8 papers)
- Minje Kim (53 papers)