
MusicTM-Dataset for Joint Representation Learning among Sheet Music, Lyrics, and Musical Audio (2012.00290v2)

Published 1 Dec 2020 in cs.SD, cs.DB, cs.IR, cs.MM, and eess.AS

Abstract: This work presents a music dataset named MusicTM-Dataset, which is used to improve the representation learning ability of different types of cross-modal retrieval (CMR). Few large music datasets covering three modalities are available for learning representations for CMR. To build such a dataset, we expand the original musical notation to synthesize audio and generate sheet-music images, and construct a fine-grained, notation-based alignment among sheet-music images, audio clips, and syllable-level lyric text, so that the MusicTM-Dataset can be exploited to learn shared representations for multimodal data points. The MusicTM-Dataset provides three modalities — sheet-music images, lyric text, and synthesized audio — whose representations are extracted by several advanced models. In this paper, we introduce the background of music datasets and describe our data-collection process. Based on our dataset, we implement some baseline methods for CMR tasks. The MusicTM-Dataset is accessible at https://github.com/dddzeng/MusicTM-Dataset.
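The abstract describes a fine-grained alignment in which each musical-notation unit links a sheet-music image, a synthesized audio clip, and a lyric syllable. A minimal sketch of one such aligned triplet record is shown below; the field names and file paths are hypothetical, and the actual layout is defined in the linked repository.

```python
from dataclasses import dataclass

@dataclass
class MusicTMTriplet:
    """One fine-grained alignment unit (illustrative structure only)."""
    sheet_image_path: str  # rendered sheet-music image for one notation unit
    audio_clip_path: str   # audio synthesized from the same notation unit
    syllable_text: str     # lyric syllable aligned to that notation unit

def make_example() -> MusicTMTriplet:
    # Hypothetical paths for illustration; not actual dataset files.
    return MusicTMTriplet(
        sheet_image_path="images/song001_bar01.png",
        audio_clip_path="audio/song001_bar01.wav",
        syllable_text="la",
    )
```

Because all three modalities share the same notation-level key, a CMR model can treat each triplet as a positive pair across any two modalities during training.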

Authors (3)
  1. Donghuo Zeng (22 papers)
  2. Yi Yu (223 papers)
  3. Keizo Oyama (7 papers)
Citations (3)
