
Title: Music Style Transfer With Diffusion Model

Abstract: Previous studies on music style transfer have mainly focused on one-to-one style conversion, which is relatively limited. To convert between multiple styles, earlier methods required designing multiple modes to disentangle the complex style of the music, resulting in high computational cost and slow audio generation. Existing music style transfer methods also generate spectrograms with artifacts, leading to significant noise in the generated audio. To address these issues, this study proposes a music style transfer framework based on diffusion models (DM) and uses spectrogram-based methods to achieve multi-to-multi music style transfer. The GuideDiff method is used to restore spectrograms to high-fidelity audio, accelerating audio generation and reducing noise in the output. Experimental results show that, compared to the baseline, the model performs well in multi-mode music style transfer and can generate high-quality audio in real time on consumer-grade GPUs.
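To make the spectrogram-based diffusion approach concrete, the sketch below shows a standard DDPM ancestral-sampling loop over a mel-spectrogram, conditioned on a target-style embedding. This is an illustrative toy only: the noise schedule, the `toy_denoiser` stand-in for the learned noise predictor, the conditioning interface, and all shapes are assumptions, not the paper's actual model or the GuideDiff vocoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear noise schedule (assumption; the paper's schedule is not given here).
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x_t, t, style_emb):
    """Stand-in for the learned noise predictor, conditioned on a
    target-style embedding (hypothetical interface). A real model would
    be a neural network trained to predict the added noise."""
    return 0.1 * x_t + 0.01 * style_emb[:, None]

def ddpm_reverse(x_T, style_emb):
    """Standard DDPM ancestral sampling: start from Gaussian noise and
    iteratively denoise toward a style-conditioned mel-spectrogram."""
    x = x_T
    for t in reversed(range(T)):
        eps = toy_denoiser(x, t, style_emb)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:  # no noise is added at the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

# 80 mel bins x 64 frames, with an 80-dim style embedding (shapes assumed).
x_T = rng.standard_normal((80, 64))
style = rng.standard_normal(80)
spec = ddpm_reverse(x_T, style)
print(spec.shape)  # (80, 64)
```

In the full pipeline described by the abstract, the sampled spectrogram would then be passed to a vocoder (GuideDiff, in the paper) to reconstruct a high-fidelity waveform.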
Comments: 8 pages, 6 figures, ICMC 2023
Subjects: Sound (cs.SD); Artificial Intelligence (cs.AI)
Journal reference: International Computer Music Conference (ICMC 2023) pp. 40-47, October 2023
Cite as: arXiv:2404.14771 [cs.SD]
  (or arXiv:2404.14771v1 [cs.SD] for this version)

Submission history

From: Hong Huang
[v1] Tue, 23 Apr 2024 06:22:19 GMT (4979kb,D)