
SpeechCLIP+: Self-supervised multi-task representation learning for speech via CLIP and speech-image data (2402.06959v1)

Published 10 Feb 2024 in cs.CL, cs.SD, and eess.AS

Abstract: The recently proposed visually grounded speech model SpeechCLIP is an innovative framework that bridges speech and text through images via CLIP without relying on text transcription. On this basis, this paper introduces two extensions to SpeechCLIP. First, we apply the Continuous Integrate-and-Fire (CIF) module to replace a fixed number of CLS tokens in the cascaded architecture. Second, we propose a new hybrid architecture that merges the cascaded and parallel architectures of SpeechCLIP into a multi-task learning framework. Our experimental evaluation is performed on the Flickr8k and SpokenCOCO datasets. The results show that in the speech keyword extraction task, the CIF-based cascaded SpeechCLIP model outperforms the previous cascaded SpeechCLIP model using a fixed number of CLS tokens. Furthermore, through our hybrid architecture, cascaded task learning boosts the performance of the parallel branch in image-speech retrieval tasks.
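The Continuous Integrate-and-Fire (CIF) mechanism mentioned in the abstract aggregates a variable-length sequence of frame-level speech features into a variable number of token-level embeddings, rather than pooling into a fixed number of CLS tokens. A minimal NumPy sketch of the standard CIF firing rule is below; the function name, threshold parameter, and inputs are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def cif(frames, alphas, beta=1.0):
    """Illustrative Continuous Integrate-and-Fire aggregation.

    frames: (T, D) array of frame-level features.
    alphas: (T,) array of non-negative per-frame weights (normally
            predicted by a small network; given here as input).
    beta:   firing threshold; a token is emitted each time the
            accumulated weight reaches beta.
    Returns a (N, D) array of token embeddings, N determined by alphas.
    """
    tokens = []
    acc_w = 0.0                          # accumulated weight
    acc_f = np.zeros(frames.shape[1])    # accumulated weighted feature
    for h, a in zip(frames, alphas):
        if acc_w + a < beta:
            acc_w += a
            acc_f += a * h
        else:
            # fire: use just enough of this frame's weight to hit beta
            r = beta - acc_w
            tokens.append((acc_f + r * h) / beta)
            acc_w = a - r                # leftover weight seeds next token
            acc_f = acc_w * h
    return np.stack(tokens) if tokens else np.zeros((0, frames.shape[1]))
```

With uniform weights of 0.5 over four frames and `beta=1.0`, each pair of frames fires one token, so the token count adapts to the total predicted weight rather than being fixed in advance.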

Authors (8)
  1. Hsuan-Fu Wang (4 papers)
  2. Yi-Jen Shih (10 papers)
  3. Heng-Jui Chang (16 papers)
  4. Layne Berry (6 papers)
  5. Puyuan Peng (21 papers)
  6. Hung-yi Lee (327 papers)
  7. Hsin-Min Wang (97 papers)
  8. David Harwath (55 papers)
Citations (2)

