Musical Word Embedding: Bridging the Gap between Listening Contexts and Music (2008.01190v1)

Published 23 Jul 2020 in cs.IR, cs.LG, cs.MM, and stat.ML

Abstract: Word embedding, pioneered by Mikolov et al., is a staple technique for word representation in NLP research that has also found popularity in music information retrieval tasks. Depending on the type of text data used for word embedding, however, vocabulary size and the degree of musical pertinence can vary significantly. In this work, we (1) train distributed representations of words using combinations of both general text data and music-specific data and (2) evaluate the resulting systems in terms of how well they associate listening contexts with musical compositions.
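
A minimal sketch of the kind of setup the abstract describes: training a word2vec-style (skip-gram) embedding on a mix of general and music-specific text, then querying a listening-context word against the learned vocabulary. This assumes a gensim pipeline; the corpus file names, hyperparameters, and the query term "workout" are illustrative placeholders, not the authors' actual data or configuration.

```python
# Sketch: train a skip-gram embedding on combined general + music text,
# then probe context-to-music associations via nearest neighbors.
# Corpus paths and the query term below are hypothetical.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess


def load_corpus(path):
    """Yield tokenized sentences from a plain-text file, one per line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = simple_preprocess(line)
            if tokens:
                yield tokens


# Combine a general corpus (e.g. an encyclopedia dump) with
# music-specific text (e.g. playlist titles, track tags).
sentences = list(load_corpus("general_text.txt")) + \
            list(load_corpus("music_text.txt"))

model = Word2Vec(
    sentences,
    vector_size=300,  # embedding dimensionality
    window=5,         # context window size
    min_count=5,      # drop rare words
    sg=1,             # skip-gram, as in Mikolov et al.
    workers=4,
)

# Associate a listening context with musical vocabulary:
# nearest neighbors of a context word in the embedding space.
print(model.wv.most_similar("workout", topn=5))
```

Training on the combined corpora is the point: general text supplies a large vocabulary, while music-specific text pulls context words (moods, activities) closer to musically pertinent terms.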

Citations (4)
