Direction Specific Ambisonics Source Separation with End-To-End Deep Learning (2305.11727v2)
Abstract: Ambisonics is a scene-based spatial audio format with several advantages over object-based formats, such as efficient whole-scene rotation and versatility. However, it does not provide direct access to the individual source signals, so these must be separated from the mixture when required. Typically, this is done with linear spherical harmonics (SH) beamforming. In this paper, we explore deep-learning-based source separation on static Ambisonics mixtures. In contrast to most source separation approaches, which separate a fixed number of sources of specific sound types, we focus on separating arbitrary sounds arriving from specific directions. Specifically, we propose three operating modes that combine a source separation neural network with SH beamforming: refinement, implicit, and mixed mode. We show that a neural network can implicitly associate conditioning directions with the spatial information contained in the Ambisonics scene to extract specific sources. We evaluate the three proposed approaches and compare them to SH beamforming on musical mixtures generated with the musdb18 dataset, as well as on mixtures generated with the FUSS dataset for universal source separation, under both anechoic and room conditions. Results show that the proposed approaches offer improved separation performance and spatial selectivity compared to conventional SH beamforming.
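The linear SH beamforming baseline mentioned in the abstract extracts a source by weighting the Ambisonics channels with the spherical harmonics evaluated at the look direction. Below is a minimal sketch, assuming first-order ACN/SN3D material (the abstract does not state the Ambisonics order used in the experiments); the function names `real_sh_foa` and `sh_beamform` are illustrative, not taken from the paper's code.

```python
import numpy as np

def real_sh_foa(azimuth, elevation):
    """Real first-order spherical harmonics (ACN ordering, SN3D
    normalization) evaluated at a look direction given in radians."""
    return np.array([
        1.0,                                  # W (n = 0, m = 0)
        np.sin(azimuth) * np.cos(elevation),  # Y (n = 1, m = -1)
        np.sin(elevation),                    # Z (n = 1, m = 0)
        np.cos(azimuth) * np.cos(elevation),  # X (n = 1, m = +1)
    ])

def sh_beamform(b, azimuth, elevation):
    """Basic maximum-directivity (plane-wave decomposition) beamformer.
    b: first-order ACN/SN3D Ambisonics mixture, shape (4, num_samples).
    Returns a mono estimate of the sound arriving from the look direction."""
    y = real_sh_foa(azimuth, elevation)
    n = np.array([0, 1, 1, 1])  # SH degree of each ACN channel
    # The factor (2n + 1) accounts for the SN3D normalization of both the
    # steering vector and the signal; dividing by (order + 1)**2 = 4 gives
    # unit gain for a plane wave arriving from the look direction.
    w = (2 * n + 1) * y / 4.0
    return w @ b
```

For a plane wave encoded from the look direction, these weights reproduce the source signal exactly, while off-axis sources are only attenuated by the broad first-order pattern; this limited spatial selectivity is what the learned approaches aim to improve on.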
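In the implicit mode, the network is conditioned on the target direction itself rather than on a beamformed input. The abstract does not describe the architecture, so the following PyTorch sketch shows only one plausible conditioning scheme: the look direction is encoded as the same real SH vector used for steering and injected via FiLM-style modulation. The class name `DirectionConditionedSeparator` and the FiLM coupling are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn

class DirectionConditionedSeparator(nn.Module):
    """Toy 'implicit mode' separator: receives the raw Ambisonics mixture
    plus a direction embedding and must learn to associate the two."""

    def __init__(self, ambi_channels=4, hidden=64):
        super().__init__()
        # The direction embedding is the real first-order SH vector of the
        # look direction, i.e. the same basis the mixture is encoded in.
        self.film = nn.Linear(ambi_channels, 2 * hidden)  # FiLM scale/shift
        self.encoder = nn.Conv1d(ambi_channels, hidden, kernel_size=8, stride=4)
        self.decoder = nn.ConvTranspose1d(hidden, 1, kernel_size=8, stride=4)

    def forward(self, mixture, direction_sh):
        # mixture: (batch, ambi_channels, time); direction_sh: (batch, ambi_channels)
        h = torch.relu(self.encoder(mixture))
        scale, shift = self.film(direction_sh).chunk(2, dim=-1)
        h = h * scale.unsqueeze(-1) + shift.unsqueeze(-1)  # condition on direction
        return self.decoder(h)  # mono estimate of the source at that direction

# Usage sketch: extract whatever arrives from azimuth 45 deg, elevation 0 deg,
# reusing real_sh_foa from the beamforming sketch above.
model = DirectionConditionedSeparator()
mix = torch.randn(1, 4, 16000)  # one second of FOA audio at 16 kHz
direction = torch.tensor(real_sh_foa(torch.pi / 4, 0.0), dtype=torch.float32)[None]
estimate = model(mix, direction)  # shape (1, 1, 16000)
```

A real system would replace the single encoder/decoder pair with a deep encoder/decoder separator (e.g., a Demucs- or U-Net-style network, as cited below); the sketch only illustrates how a conditioning direction can be fused with the spatial information in the mixture.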
- P. Guiraud, S. Hafezi, P. A. Naylor, A. H. Moore, J. Donley, V. Tourbabin, and T. Lunner, “An introduction to the speech enhancement for augmented reality (SPEAR) challenge,” in 2022 International Workshop on Acoustic Signal Enhancement (IWAENC), 2022.
- J. Ahrens, H. Helmholz, D. L. Alon, and S. V. A. Gari, “Spherical harmonics decomposition of a sound field based on microphones around the circumference of a human head,” in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Oct. 2021.
- L. McCormack, A. Politis, R. Gonzalez, T. Lokki, and V. Pulkki, “Parametric ambisonic encoding of arbitrary microphone arrays,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 2062–2075, 2022.
- A. A. Nugraha, A. Liutkus, and E. Vincent, “Multichannel audio source separation with deep neural networks,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 9, pp. 1652–1664, 2016.
- A. Ozerov and C. Févotte, “Multichannel nonnegative matrix factorization in convolutive mixtures for audio source separation,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 3, pp. 550–563, 2010.
- N. Epain and C. T. Jin, “Independent component analysis using spherical microphone arrays,” Acta Acustica united with Acustica, vol. 98, no. 1, pp. 91–102, 2012.
- M. Hafsati, N. Epain, R. Gribonval, and N. Bertin, “Sound source separation in the higher order ambisonics domain,” in Proceedings of the 22nd International Conference on Digital Audio Effects (DAFx), 2019.
- J. Nikunen and A. Politis, “Multichannel NMF for source separation with ambisonic signals,” in 2018 16th International Workshop on Acoustic Signal Enhancement (IWAENC). IEEE, 2018, pp. 251–255.
- A. J. Muñoz-Montoro, J. J. Carabias-Orti, and P. Vera-Candeas, “Ambisonics-domain singing voice separation combining deep neural network and direction-aware multichannel NMF,” in 2021 IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2021.
- Y. Mitsufuji, N. Takamune, S. Koyama, and H. Saruwatari, “Multichannel blind source separation based on evanescent-region-aware non-negative tensor factorization in spherical harmonic domain,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 607–617, 2021.
- M. Guzik and K. Kowalczyk, “Wishart localization prior on spatial covariance matrix in ambisonic source separation using non-negative tensor factorization,” in ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 446–450.
- Y. Mitsufuji, G. Fabbro, S. Uhlich, and F.-R. Stöter, “Music Demixing Challenge at ISMIR 2021,” arXiv preprint arXiv:2108.13559, 2021.
- M. Cobos, J. Ahrens, K. Kowalczyk, and A. Politis, “An overview of machine learning and other data-based methods for spatial audio capture, processing, and reproduction,” EURASIP Journal on Audio, Speech, and Music Processing, vol. 2022, no. 1, Dec. 2022. [Online]. Available: https://asmp-eurasipjournals.springeropen.com/articles/10.1186/s13636-022-00242-x
- A. Bosca, A. Guérin, L. Perotin, and S. Kitić, “Dilated U-net based approach for multichannel speech enhancement from first-order Ambisonics recordings,” in 2020 28th European Signal Processing Conference (EUSIPCO). IEEE, 2021, pp. 216–220.
- T. Ochiai, M. Delcroix, R. Ikeshita, K. Kinoshita, T. Nakatani, and S. Araki, “Beam-TasNet: Time-domain audio separation network meets frequency-domain beamformer,” in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 6384–6388. [Online]. Available: https://ieeexplore.ieee.org/document/9053575/
- T. Jenrungrot, V. Jayaram, S. Seitz, and I. Kemelmacher-Shlizerman, “The cone of silence: Speech separation by localization,” Advances in Neural Information Processing Systems, vol. 33, pp. 20925–20938, 2020.
- “Online listening examples,” http://research.spa.aalto.fi/publications/papers/acta22-sss/.
- E. Vincent, H. Sawada, P. Bofill, S. Makino, and J. P. Rosca, “First stereo audio source separation evaluation campaign: data, algorithms and results,” in International Conference on Independent Component Analysis and Signal Separation. Springer, 2007, pp. 552–559.
- F. Lluís, N. Meyer-Kahlen, V. Chatziioannou, and A. Hofmann, “A deep learning approach for angle specific source separation from raw Ambisonics signals,” in DAGA, 2022.
- A. Défossez, N. Usunier, L. Bottou, and F. Bach, “Music source separation in the waveform domain,” arXiv preprint arXiv:1911.13254, 2019.
- O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
- Y. N. Dauphin, A. Fan, M. Auli, and D. Grangier, “Language modeling with gated convolutional networks,” in International Conference on Machine Learning. PMLR, 2017, pp. 933–941.
- A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., “PyTorch: An imperative style, high-performance deep learning library,” Advances in Neural Information Processing Systems, vol. 32, 2019.
- Z. Rafii, A. Liutkus, F.-R. Stöter, S. I. Mimilakis, and R. Bittner, “MUSDB18-HQ - an uncompressed version of MUSDB18,” Aug. 2019. [Online]. Available: https://doi.org/10.5281/zenodo.3338373
- S. Wisdom, H. Erdogan, D. P. Ellis, R. Serizel, N. Turpault, E. Fonseca, J. Salamon, P. Seetharaman, and J. R. Hershey, “What’s all the fuss about free universal sound separation data?” in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 186–190.
- J. Le Roux, S. Wisdom, H. Erdogan, and J. R. Hershey, “SDR – half-baked or well done?” in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 626–630.
- R. H. Hardin and N. J. A. Sloane, “McLaren’s improved snub cube and other new spherical designs in three dimensions,” Discrete and Computational Geometry, vol. 15, pp. 429–441, 1996.