
Exploring Effective Fusion Algorithms for Speech Based Self-Supervised Learning Models (2212.10092v1)

Published 20 Dec 2022 in cs.SD and eess.AS

Abstract: Self-supervised learning (SSL) has achieved great success in various areas, including speech processing. Recently, it has been shown that speech-based SSL models can extract superior universal representations across a range of downstream tasks compared to traditional hand-crafted features (e.g., FBank, MFCC) in the SUPERB benchmark. However, different types of SSL models may exhibit distinct strengths on different downstream tasks. To better exploit the potential of SSL models, in this work we explore the effective fusion of multiple SSL models. A series of model fusion algorithms are investigated and compared by combining two types of SSL models, HuBERT and Data2vec, on two representative tasks from the SUPERB benchmark: speaker identification (SID) and automatic speech recognition (ASR). The experimental results demonstrate that our proposed fusion algorithms can significantly boost performance over the individual models.
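The abstract describes fusing frame-level representations from two SSL models (HuBERT and Data2vec). The paper's specific fusion algorithms are not detailed here; the sketch below illustrates one common baseline variant, a learnable softmax-weighted sum of aligned feature sequences, using hypothetical toy features (the function and variable names are illustrative, not from the paper).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(feats_a, feats_b, logits=(0.0, 0.0)):
    """Weighted-sum fusion of two time-aligned feature sequences.

    feats_a, feats_b: lists of frames, each frame a list of floats
    of the same dimension. logits: unnormalized fusion weights
    (would be learned jointly with the downstream model in practice).
    """
    wa, wb = softmax(list(logits))
    return [[wa * a + wb * b for a, b in zip(fa, fb)]
            for fa, fb in zip(feats_a, feats_b)]

# Toy example: two 2-frame, 3-dimensional feature sequences standing in
# for HuBERT and Data2vec outputs.
hubert_feats = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
data2vec_feats = [[3.0, 2.0, 1.0], [6.0, 5.0, 4.0]]

# Equal logits give 0.5/0.5 weights, i.e. the elementwise mean.
fused = fuse(hubert_feats, data2vec_feats)
```

In practice the fusion weights (and any projection layers aligning the two models' feature dimensions) are trained end-to-end with the downstream SID or ASR head, rather than fixed as here.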

Authors (4)
  1. Changli Tang (15 papers)
  2. Yujin Wang (17 papers)
  3. Xie Chen (166 papers)
  4. Wei-Qiang Zhang (37 papers)
Citations (2)
