Closing the Training/Inference Gap for Deep Attractor Networks (1911.02091v1)
Abstract: This paper improves the deep attractor network (DANet) approach by closing the gap between training and inference. During training, DANet relies on attractors that are computed from the ground-truth separations. As this information is not available at inference time, the attractors have to be estimated, which is typically done by k-means. This results in two mismatches: the first stems from using classical k-means with the Euclidean norm, whereas the masks during training are computed using dot-product similarity. We show that switching to spherical k-means already improves the performance of DANet. Furthermore, we show that k-means clustering can be fully incorporated into DANet training. This eliminates the training/inference gap and consequently yields a scale-invariant signal-to-distortion ratio (SI-SDR) improvement of 1.1 dB on the Wall Street Journal corpus (WSJ0).
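To illustrate the first mismatch the abstract describes, the sketch below contrasts the inference-time clustering step: spherical k-means assigns embeddings to centroids by dot-product (cosine) similarity on the unit sphere, matching how masks are formed during training, instead of by Euclidean distance. This is a minimal, hypothetical NumPy sketch, not the paper's implementation; the deterministic farthest-point initialization and the function name are my own choices.

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=50):
    """Cluster row vectors of X into k groups by cosine (dot-product)
    similarity, i.e. k-means constrained to the unit sphere.
    Illustrative sketch only; DANet specifics (embedding network,
    mask computation) are omitted."""
    # Project all points onto the unit sphere.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Deterministic greedy init: start from the first point, then
    # repeatedly add the point least similar to the centroids so far.
    idx = [0]
    for _ in range(k - 1):
        sims = X @ X[idx].T
        idx.append(int(np.argmin(sims.max(axis=1))))
    centroids = X[idx].copy()
    for _ in range(n_iter):
        # Assignment step: highest dot-product similarity, not
        # smallest Euclidean distance as in classical k-means.
        labels = np.argmax(X @ centroids.T, axis=1)
        # Update step: mean direction, re-normalized to the sphere.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.mean(axis=0)
                centroids[j] = m / np.linalg.norm(m)
    return centroids, labels
```

For unit-normalized embeddings the two criteria are related (squared Euclidean distance is a monotone function of the dot product), but DANet embeddings are not normalized, which is where the mismatch with Euclidean k-means arises.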