Multitrack Music Transcription with a Time-Frequency Perceiver (2306.10785v1)
Abstract: Multitrack music transcription aims to transcribe a music audio input into the musical notes of multiple instruments simultaneously. It is a very challenging task that typically requires a more complex model to achieve satisfactory results. Moreover, prior works mostly focus on transcribing regular instruments while neglecting vocals, which are usually the most important signal source when present in a piece of music. In this paper, we propose a novel deep neural network architecture, Perceiver TF, to model the time-frequency representation of audio input for multitrack transcription. Perceiver TF augments the Perceiver architecture with a hierarchical expansion that adds a Transformer layer to model temporal coherence. Our model thus inherits the scalability of Perceiver, allowing it to handle the transcription of many instruments in a single model. In experiments, we train a Perceiver TF to model 12 instrument classes as well as vocals in a multi-task learning manner. Our results demonstrate that the proposed system outperforms state-of-the-art counterparts (e.g., MT3 and SpecTNT) on various public datasets.
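The abstract's two-stage design (a Perceiver-style spectral stage followed by a temporal Transformer layer) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: the class name `PerceiverTFBlock`, the latent count, and all dimensions are hypothetical, and details such as feed-forward sublayers, positional encodings, and the output head are omitted.

```python
import torch
import torch.nn as nn


class PerceiverTFBlock(nn.Module):
    """Hedged sketch of one Perceiver TF-style block (hypothetical names/sizes):
    learned latents cross-attend over the frequency axis of each time frame,
    then a Transformer layer models temporal coherence across frames."""

    def __init__(self, dim=128, n_latents=24, n_heads=4):
        super().__init__()
        # One learned latent array, shared across all time frames.
        self.latents = nn.Parameter(torch.randn(n_latents, dim))
        # Spectral stage: latents (queries) attend to per-frame spectral features.
        self.spectral_cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Temporal stage: self-attention along the time axis per latent index.
        self.temporal_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True)

    def forward(self, spec_feats):
        # spec_feats: (batch, time, freq_bins, dim)
        b, t, f, d = spec_feats.shape
        k = self.latents.shape[0]
        # 1) Spectral cross-attention, applied independently per frame.
        q = self.latents.unsqueeze(0).expand(b * t, -1, -1)   # (b*t, K, d)
        kv = spec_feats.reshape(b * t, f, d)                  # (b*t, F, d)
        lat, _ = self.spectral_cross_attn(q, kv, kv)          # (b*t, K, d)
        # 2) Temporal self-attention: each latent index forms a time sequence.
        lat = lat.reshape(b, t, k, d).permute(0, 2, 1, 3).reshape(b * k, t, d)
        lat = self.temporal_layer(lat)                        # (b*K, T, d)
        return lat.reshape(b, k, t, d).permute(0, 2, 1, 3)    # (b, T, K, d)
```

Because the spectral stage compresses each frame's frequency axis into a fixed number of latents, the temporal stage's cost scales with the latent count rather than the spectrogram width, which is the scalability property the abstract attributes to Perceiver.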
- “Reconvat: A semi-supervised automatic music transcription framework for low-resource real-world data,” in Proc. ACM Multimedia, 2021, pp. 3918–3926.
- “MT3: Multi-task multitrack music transcription,” in Proc. ICLR, 2022.
- “Perceiver: General perception with iterative attention,” in Proc. ICML, 2021, pp. 4651–4664.
- “SpecTNT: A time-frequency transformer for music audio,” in Proc. ISMIR, 2021.
- “Improving music source separation based on deep neural networks through data augmentation and network blending,” in Proc. ICASSP, 2017, pp. 261–265.
- “Catnet: Music source separation system with mix-audio augmentation,” arXiv preprint arXiv:2102.09966, 2021.
- “An overview of lead and accompaniment separation in music,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 8, pp. 1307–1335, 2018.
- “Multitask learning for frame-level instrument recognition,” in Proc. ICASSP, 2019, pp. 381–385.
- “Joint singing voice separation and f0 estimation with deep u-net architectures,” in Proc. EUSIPCO, 2019, pp. 1–5.
- “Multi-instrument automatic music transcription with self-attention-based instance segmentation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 2796–2809, 2020.
- “Multi-instrument music transcription based on deep spherical clustering of spectrograms and pitchgrams,” in Proc. ISMIR, 2020.
- “Jointist: Joint learning for multi-instrument transcription and its applications,” arXiv preprint arXiv:2206.10805, 2022.
- “On the preparation and validation of a large-scale dataset of singing transcription,” in Proc. ICASSP, 2021, pp. 276–280.
- “Pseudo-label transfer from frame-level to note-level in a teacher-student framework for singing transcription from polyphonic music,” in Proc. ICASSP, 2022.
- Jui-Yang Hsu and Li Su, “Vocano: A note transcription framework for singing voice in polyphonic music,” in Proc. ISMIR, 2021, pp. 293–300.
- “Conformer: Convolution-augmented transformer for speech recognition,” in Proc. INTERSPEECH, 2020.
- “Semi-supervised music tagging transformer,” in Proc. ISMIR, 2021, pp. 769–776.
- “Modeling beats and downbeats with a time-frequency transformer,” in Proc. ICASSP, 2022, pp. 401–405.
- “To catch a chorus, verse, intro, or anything else: Analyzing a song with structural functions,” in Proc. ICASSP, 2022, pp. 416–420.
- “Identity mappings in deep residual networks,” in Proc. ECCV, 2016, pp. 630–645.
- “Onsets and frames: Dual-objective piano transcription,” in Proc. ISMIR, 2018, pp. 50–57.
- “Empirical evaluation of gated recurrent neural networks on sequence modeling,” in Proc. NeurIPS, 2014.
- “Cutting music source separation some Slakh: A dataset to study the impact of training data quality and quantity,” in Proc. WASPAA, 2019.
- Colin Raffel, “Learning-based methods for comparing sequences, with applications to audio-to-midi alignment and matching,” Ph.D. dissertation, Columbia University, 2016.
- “Enabling factorized piano music modeling and generation with the maestro dataset,” in Proc. ICLR, 2019.
- “Guitarset: A dataset for guitar transcription,” in Proc. ISMIR, 2018, pp. 453–460.
- “Joint detection and classification of singing voice melody using convolutional recurrent neural networks,” Applied Sciences, vol. 9, no. 7, p. 1324, 2019.
- “Decoupling magnitude and phase estimation with deep resunet for music source separation,” in Proc. ISMIR, 2021.
- Paszke et al., “Pytorch: An imperative style, high-performance deep learning library,” in Proc. NeurIPS, 2019, vol. 32.
- “Decoupled weight decay regularization,” in Proc. ICLR, 2019.
- “Evaluating automatic polyphonic music transcription,” in Proc. ISMIR, 2018, pp. 42–49.