DAC-JAX: A JAX Implementation of the Descript Audio Codec (2405.11554v1)
Abstract: We present an open-source implementation of the Descript Audio Codec (DAC) using Google's JAX ecosystem of Flax, Optax, Orbax, AUX, and CLU. Our codebase enables the reuse of model weights from the original PyTorch DAC, and we confirm that the two implementations produce equivalent token sequences and decoded audio if given the same input. We provide a training and fine-tuning script which supports device parallelism, although we have only verified it using brief training runs with a small dataset. Even with limited GPU memory, the original DAC can compress or decompress a long audio file by processing it as a sequence of overlapping "chunks." We implement this feature in JAX and benchmark the performance on two types of GPUs. On a consumer-grade GPU, DAC-JAX outperforms the original DAC for compression and decompression at all chunk sizes. However, on a high-performance, cluster-based GPU, DAC-JAX outperforms the original DAC for small chunk sizes but performs worse for large chunks.
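The chunked compression described in the abstract can be illustrated with a short sketch: a long signal is split into fixed-size windows with a small overlap, and each window is encoded independently so that peak GPU memory stays bounded regardless of file length. The names below (`encode_chunk`, `CHUNK_SIZE`, `OVERLAP`) are illustrative assumptions for this sketch, not the actual DAC-JAX API.

```python
# Minimal sketch of overlapping-chunk encoding, assuming a per-chunk encoder.
# NOTE: `encode_chunk` is a hypothetical stand-in, NOT the DAC-JAX interface.
import jax
import jax.numpy as jnp

CHUNK_SIZE = 44_100   # samples per chunk (assumed: 1 s at 44.1 kHz)
OVERLAP = 4_410       # assumed overlap between consecutive chunks
HOP = CHUNK_SIZE - OVERLAP

def encode_chunk(chunk: jnp.ndarray) -> jnp.ndarray:
    """Placeholder encoder: emits one dummy 'token' per 512-sample frame."""
    frames = chunk[: (chunk.shape[0] // 512) * 512].reshape(-1, 512)
    return jnp.argmax(jnp.abs(frames), axis=-1)  # stand-in for RVQ tokens

def encode_long_audio(audio: jnp.ndarray) -> list:
    """Encode a long signal as a sequence of overlapping chunks."""
    tokens = []
    start = 0
    while start < audio.shape[0]:
        tokens.append(encode_chunk(audio[start : start + CHUNK_SIZE]))
        start += HOP
    return tokens

if __name__ == "__main__":
    audio = jax.random.normal(jax.random.PRNGKey(0), (10 * 44_100,))  # 10 s of noise
    print(len(encode_long_audio(audio)))
```

In practice the chunk size trades memory for throughput, which is why the abstract benchmarks several chunk sizes on both consumer-grade and cluster-based GPUs.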