
A Study of Transfer Learning in Music Source Separation (2010.12650v1)

Published 23 Oct 2020 in cs.SD and cs.LG

Abstract: Supervised deep learning methods for audio source separation can be very effective in domains with large amounts of training data. While some musical domains, such as rock and pop, have enough data to train a separation system, many others do not, including classical music, choral music, and non-Western music traditions. It is well known that transfer learning from related domains can boost the performance of deep learning systems, but it is not always clear how best to pretrain. In this work we investigate the effectiveness of data augmentation during pretraining, the impact on performance when the pretraining and downstream datasets share similar content domains, and how much of a pretrained model must be retrained on the final target task.
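
The question the abstract raises, how much of a pretrained model to retrain on the target task, can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the paper's actual architecture or code: the model class, checkpoint name, and tensor shapes are all assumptions. It shows a toy spectrogram-mask estimator whose recurrent body is frozen after pretraining, so that only the mask head is fine-tuned on a (placeholder) target-domain batch.

```python
# Hypothetical sketch of partial fine-tuning; names and shapes are illustrative.
import torch
import torch.nn as nn

class MaskEstimator(nn.Module):
    """Toy BLSTM mask estimator standing in for a separation model."""
    def __init__(self, n_bins=513, hidden=300, n_sources=2):
        super().__init__()
        self.blstm = nn.LSTM(n_bins, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.mask_head = nn.Linear(2 * hidden, n_bins * n_sources)
        self.n_bins, self.n_sources = n_bins, n_sources

    def forward(self, mix_spec):  # mix_spec: (batch, frames, n_bins)
        h, _ = self.blstm(mix_spec)
        masks = torch.sigmoid(self.mask_head(h))
        return masks.view(*mix_spec.shape[:2], self.n_bins, self.n_sources)

model = MaskEstimator()
# model.load_state_dict(torch.load("pretrained_pop_rock.pt"))  # assumed checkpoint

# Freeze the recurrent body; only the mask head is retrained on the target domain.
for p in model.blstm.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder target-domain batch; in pretraining, inputs would typically be
# augmented (e.g., pitch shifting or remixing stems) before this step.
mix = torch.rand(8, 100, 513)        # mixture spectrogram frames
target = torch.rand(8, 100, 513, 2)  # ideal masks for two sources

masks = model(mix)
loss = loss_fn(masks, target)
loss.backward()
optimizer.step()
```

Under this setup, varying which submodules have `requires_grad` disabled is the knob for "how much of a model must be retrained": freezing nothing recovers full fine-tuning, while freezing everything but the head is the cheapest transfer strategy.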

Authors (3)
  1. Andreas Bugler (1 paper)
  2. Bryan Pardo (30 papers)
  3. Prem Seetharaman (26 papers)
Citations (3)
