Sampling strategies in Siamese Networks for unsupervised speech representation learning (1804.11297v2)

Published 30 Apr 2018 in cs.CL and cs.LG

Abstract: Recent studies have investigated siamese network architectures for learning invariant speech representations using same-different side information at the word level. Here we investigate systematically an often ignored component of siamese networks: the sampling procedure (how pairs of same vs. different tokens are selected). We show that sampling strategies taking into account Zipf's Law, the distribution of speakers and the proportions of same and different pairs of words significantly impact the performance of the network. In particular, we show that word frequency compression improves learning across a large range of variations in number of training pairs. This effect does not apply to the same extent to the fully unsupervised setting, where the pairs of same-different words are obtained by spoken term discovery. We apply these results to pairs of words discovered using an unsupervised algorithm and show an improvement over the state of the art in unsupervised representation learning using siamese networks.
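The sampling procedure described in the abstract can be sketched as follows. This is an illustrative implementation under stated assumptions, not the paper's actual code: the compression function `f ** alpha` (flattening the Zipfian word-frequency distribution), the `p_same` same/different ratio, and all function and variable names are hypothetical choices made here for concreteness.

```python
import random
from collections import Counter

def compressed_sampling_weights(token_counts, alpha=0.5):
    """Map raw word frequencies f to f**alpha. With alpha < 1 this
    compresses the Zipfian distribution so rare words are sampled more
    often; alpha = 0 gives uniform sampling over word types.
    (The power-law form of the compression is an assumption.)"""
    return {word: count ** alpha for word, count in token_counts.items()}

def sample_pair(tokens_by_word, weights, p_same=0.5, rng=random):
    """Draw one siamese training pair: two tokens of the same word type
    with probability p_same (label 1), otherwise tokens of two different
    word types (label 0)."""
    words = list(weights)
    w_list = [weights[w] for w in words]
    if rng.random() < p_same:
        # "Same" pair: pick a word type with at least two tokens,
        # then two distinct tokens of that word.
        w = rng.choices(words, weights=w_list)[0]
        while len(tokens_by_word[w]) < 2:
            w = rng.choices(words, weights=w_list)[0]
        a, b = rng.sample(tokens_by_word[w], 2)
        return a, b, 1
    # "Different" pair: two distinct word types, one token of each.
    w1 = rng.choices(words, weights=w_list)[0]
    w2 = rng.choices(words, weights=w_list)[0]
    while w2 == w1:
        w2 = rng.choices(words, weights=w_list)[0]
    return rng.choice(tokens_by_word[w1]), rng.choice(tokens_by_word[w2]), 0
```

In this sketch, the `alpha` exponent controls the frequency compression the abstract credits with improved learning, and `p_same` controls the proportion of same vs. different pairs, the two sampling knobs the paper reports as impactful.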

Authors (6)
  1. Rachid Riad (14 papers)
  2. Corentin Dancette (14 papers)
  3. Julien Karadayi (8 papers)
  4. Neil Zeghidour (39 papers)
  5. Thomas Schatz (5 papers)
  6. Emmanuel Dupoux (81 papers)
Citations (28)