
Conditioning Deep Generative Raw Audio Models for Structured Automatic Music (1806.09905v1)

Published 26 Jun 2018 in cs.SD, cs.LG, eess.AS, and stat.ML

Abstract: Existing automatic music generation approaches that feature deep learning can be broadly classified into two types: raw audio models and symbolic models. Symbolic models, which train and generate at the note level, are currently the more prevalent approach; these models can capture long-range dependencies of melodic structure, but fail to grasp the nuances and richness of raw audio generations. Raw audio models, such as DeepMind's WaveNet, train directly on sampled audio waveforms, allowing them to produce realistic-sounding, albeit unstructured music. In this paper, we propose an automatic music generation methodology combining both of these approaches to create structured, realistic-sounding compositions. We consider a Long Short Term Memory network to learn the melodic structure of different styles of music, and then use the unique symbolic generations from this model as a conditioning input to a WaveNet-based raw audio generator, creating a model for automatic, novel music. We then evaluate this approach by showcasing results of this work.
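To make the conditioning idea concrete, below is a minimal, illustrative PyTorch sketch of a single WaveNet-style residual block with local conditioning, in the spirit of the approach the abstract describes: symbolic features generated by an LSTM (e.g., a piano-roll upsampled to the audio sample rate) are injected into the gated activation of the raw audio model. This is not the authors' released implementation; the channel sizes, the 88-dimensional piano-roll conditioning format, and the class name are assumptions for illustration.

```python
# Illustrative sketch (not the paper's released code): a WaveNet-style residual
# block with local conditioning on symbolic features. Channel sizes and the
# piano-roll conditioning format (88 pitches) are assumptions.
import torch
import torch.nn as nn


class ConditionedResidualBlock(nn.Module):
    def __init__(self, residual_channels=64, skip_channels=128,
                 cond_channels=88, dilation=1):
        super().__init__()
        # Dilated convolutions over the raw-audio feature stream.
        self.filter_conv = nn.Conv1d(residual_channels, residual_channels,
                                     kernel_size=2, dilation=dilation)
        self.gate_conv = nn.Conv1d(residual_channels, residual_channels,
                                   kernel_size=2, dilation=dilation)
        # 1x1 projections of the (upsampled) symbolic conditioning signal.
        self.cond_filter = nn.Conv1d(cond_channels, residual_channels, 1)
        self.cond_gate = nn.Conv1d(cond_channels, residual_channels, 1)
        # Output projections for the residual and skip paths.
        self.residual_proj = nn.Conv1d(residual_channels, residual_channels, 1)
        self.skip_proj = nn.Conv1d(residual_channels, skip_channels, 1)
        self.dilation = dilation

    def forward(self, x, cond):
        # x:    (batch, residual_channels, T) audio features
        # cond: (batch, cond_channels, T) symbolic features at the audio rate
        # Left-pad so the dilated convolution stays causal and keeps length T.
        pad = (self.dilation, 0)
        f = self.filter_conv(nn.functional.pad(x, pad))
        g = self.gate_conv(nn.functional.pad(x, pad))
        # Gated activation with the conditioning term added inside the gate,
        # as in conditional WaveNet.
        z = torch.tanh(f + self.cond_filter(cond)) * \
            torch.sigmoid(g + self.cond_gate(cond))
        skip = self.skip_proj(z)
        residual = self.residual_proj(z) + x
        return residual, skip
```

In a full model, blocks like this would be stacked with increasing dilations, the skip outputs summed and passed through output layers to predict the next audio sample, and the LSTM's symbolic output upsampled in time before being fed in as `cond`.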

Authors (4)
  1. Rachel Manzelli (3 papers)
  2. Vijay Thakkar (4 papers)
  3. Ali Siahkamari (4 papers)
  4. Brian Kulis (33 papers)
Citations (43)
