
Spatial Pyramid Encoding with Convex Length Normalization for Text-Independent Speaker Verification (1906.08333v1)

Published 19 Jun 2019 in eess.AS, cs.CL, cs.LG, cs.SD, and stat.ML

Abstract: In this paper, we propose a new pooling method called spatial pyramid encoding (SPE) to generate speaker embeddings for text-independent speaker verification. We first partition the output feature maps from a deep residual network (ResNet) into increasingly fine sub-regions and extract speaker embeddings from each sub-region through a learnable dictionary encoding layer. These embeddings are concatenated to obtain the final speaker representation. The SPE layer not only generates a fixed-dimensional speaker embedding for a variable-length speech segment, but also aggregates the information of feature distribution from multi-level temporal bins. Furthermore, we apply deep length normalization by augmenting the loss function with ring loss. By applying ring loss, the network gradually learns to normalize the speaker embeddings using model weights themselves while preserving convexity, leading to more robust speaker embeddings. Experiments on the VoxCeleb1 dataset show that the proposed system using the SPE layer and ring loss-based deep length normalization outperforms both i-vector and d-vector baselines.
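The abstract describes two components: an SPE pooling layer that splits the ResNet feature maps along time into pyramid bins and encodes each bin with a learnable dictionary, and a ring loss term that softly normalizes embedding lengths. The PyTorch sketch below is illustrative only, not the authors' implementation: it assumes a Deep TEN-style residual dictionary encoder shared across pyramid bins, a two-level (1 + 2 bin) pyramid, and hypothetical names and hyperparameters (`DictionaryEncoding`, `SpatialPyramidEncoding`, `num_codewords`).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DictionaryEncoding(nn.Module):
    """Hypothetical learnable dictionary encoding (Deep TEN style):
    soft-assigns each frame-level feature to K learned codewords and
    aggregates the assignment-weighted residuals."""
    def __init__(self, dim, num_codewords):
        super().__init__()
        self.codewords = nn.Parameter(torch.randn(num_codewords, dim) * 0.1)
        self.scale = nn.Parameter(torch.ones(num_codewords))

    def forward(self, x):                       # x: (batch, frames, dim)
        # Residuals between every frame and every codeword: (B, T, K, D)
        r = x.unsqueeze(2) - self.codewords.unsqueeze(0).unsqueeze(0)
        # Soft-assignment weights from scaled squared distances: (B, T, K)
        w = F.softmax(-self.scale * r.pow(2).sum(-1), dim=2)
        # Aggregate weighted residuals over time: (B, K, D)
        e = (w.unsqueeze(-1) * r).sum(1)
        return F.normalize(e, dim=-1).flatten(1)   # (B, K * D)

class SpatialPyramidEncoding(nn.Module):
    """Sketch of SPE pooling: partition the time axis into increasingly
    fine bins (here 1 bin, then 2 bins), encode each bin with a shared
    dictionary encoder, and concatenate the per-bin embeddings."""
    def __init__(self, dim, num_codewords=64, levels=(1, 2)):
        super().__init__()
        self.levels = levels
        self.encoder = DictionaryEncoding(dim, num_codewords)

    def forward(self, x):                       # x: (batch, frames, dim)
        embeddings = []
        for n_bins in self.levels:
            for chunk in torch.chunk(x, n_bins, dim=1):
                embeddings.append(self.encoder(chunk))
        return torch.cat(embeddings, dim=1)     # fixed-dimensional output
```

Because every bin is reduced to a fixed K x D encoding regardless of how many frames it contains, the concatenated output has the same dimension for any input length, which is what makes the layer usable on variable-length speech segments.

For the deep length normalization, ring loss (Zheng et al., 2018, which the paper builds on) penalizes the squared deviation of each embedding's L2 norm from a single learnable radius R; the penalty is convex in the norm, so the network can anneal all embeddings toward the same length. A minimal sketch, with an assumed loss weight:

```python
class RingLoss(nn.Module):
    """Ring loss: pull embedding L2 norms toward a learnable target
    radius R, added as an auxiliary term to the classification loss."""
    def __init__(self, loss_weight=0.01):       # weight is an assumption
        super().__init__()
        self.radius = nn.Parameter(torch.tensor(1.0))
        self.loss_weight = loss_weight

    def forward(self, embeddings):              # embeddings: (batch, dim)
        norms = embeddings.norm(p=2, dim=1)
        return self.loss_weight * (norms - self.radius).pow(2).mean() / 2

# Training objective: standard speaker-classification loss plus the ring
# term, e.g.  loss = F.cross_entropy(logits, labels) + ring_loss(emb)
```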

Authors (5)
  1. Youngmoon Jung (18 papers)
  2. Younggwan Kim (3 papers)
  3. Hyungjun Lim (6 papers)
  4. Yeunju Choi (10 papers)
  5. Hoirin Kim (28 papers)
Citations (32)
