Music ControlNet:
Multiple Time-varying Controls for Music Generation


Shih-Lun Wu1,2*   Chris Donahue1   Shinji Watanabe1   Nicholas J. Bryan2  

1School of Computer Science, Carnegie Mellon University
2Adobe Research
*Work done during an internship at Adobe Research

Paper | Video

Abstract


Text-to-music generation models are now capable of generating high-quality music audio in broad styles. However, text control is primarily suitable for the manipulation of global musical attributes like genre, mood, and tempo, and is less suitable for precise control over time-varying attributes such as the positions of beats in time or the changing dynamics of the music. We propose Music ControlNet, a diffusion-based music generation model that offers multiple precise, time-varying controls over generated audio. To imbue text-to-music models with time-varying control, we propose an approach analogous to pixel-wise control of the image-domain ControlNet method. Specifically, we extract controls from training audio yielding paired data, and fine-tune a diffusion-based conditional generative model over audio spectrograms given melody, dynamics, and rhythm controls. While the image-domain Uni-ControlNet method already allows generation with any subset of controls, we devise a new strategy to allow creators to input controls that are only partially specified in time. We evaluate both on controls extracted from audio and controls we expect creators to provide, demonstrating that we can generate realistic music that corresponds to control inputs in both settings. While few comparable music generation models exist, we benchmark against MusicGen, a recent model that accepts text and melody input, and show that our model generates music that is 49% more faithful to input melodies despite having 35x fewer parameters, training on 11x less data, and enabling two additional forms of time-varying control.
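As the abstract notes, the time-varying controls are extracted directly from training audio to form paired data. As an illustrative sketch (not the paper's exact feature definition), a dynamics control can be computed as a smoothed frame-level loudness curve; the frame length, hop size, and smoothing window below are assumptions.

```python
import numpy as np

def dynamics_control(audio: np.ndarray, frame_len: int = 2048, hop: int = 512,
                     smooth_frames: int = 9) -> np.ndarray:
    """Frame-level loudness curve (RMS in dB), smoothed with a moving average.

    Hypothetical sketch of a dynamics control extracted from audio; the
    paper's exact feature definition may differ.
    """
    n_frames = 1 + max(0, len(audio) - frame_len) // hop
    rms = np.empty(n_frames)
    for i in range(n_frames):
        frame = audio[i * hop: i * hop + frame_len]
        rms[i] = np.sqrt(np.mean(frame ** 2) + 1e-12)
    # Convert to decibels and smooth so the control captures gradual dynamics.
    db = 20.0 * np.log10(rms + 1e-12)
    kernel = np.ones(smooth_frames) / smooth_frames
    return np.convolve(db, kernel, mode="same")
```

Melody and rhythm controls can be derived analogously (e.g., from a chromagram and from detected beat/downbeat positions, respectively), yielding per-frame control signals aligned with the spectrogram.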


Bibtex

        
        @article{Wu2023MusicControlNet,
            title={Music ControlNet: Multiple Time-varying Controls for Music Generation}, 
            author={Wu, Shih-Lun and Donahue, Chris and Watanabe, Shinji and Bryan, Nicholas J.},
            year={2023},
            eprint={TBD},
            archivePrefix={arXiv},
            primaryClass={cs.SD}
        }
                  

Examples (Cherry-picked)

Below are generated music clips and feature plots for individual controls (melody, dynamics, and rhythm), their combinations, and controls that are only partially specified in time. The examples here are mildly cherry-picked to show our best results. For random (non-cherry-picked) examples, please see the section below.

For each example, the first row of plots shows the controls extracted from the generated audio, and the second row shows the input controls. Gray shaded regions denote partially-specified controls; the control is not enforced within the gray region. A melody reference file is also provided for examples that include melody control.
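A minimal sketch of how a partially-specified control can be prepared is shown below: frames outside the specified region (the gray shaded area in the plots) are zeroed out, and a binary mask channel records which frames are actually enforced. The function name and exact conditioning scheme are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def apply_partial_control(control: np.ndarray, start: int, end: int):
    """Zero the control outside frames [start, end) and return a mask channel.

    Illustrative sketch: the model receives both the masked control and the
    mask, so it knows which frames carry an enforced control and is free to
    generate unconstrained content elsewhere.
    """
    mask = np.zeros_like(control)
    mask[..., start:end] = 1.0
    return control * mask, mask
```

For example, enforcing a dynamics curve only over frames 2-4 of a 10-frame clip leaves the remaining frames unconstrained.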
Melody Control
[Reference audio, generated music, and feature plots]
Dynamics Control
[Generated music and feature plots]
Rhythm Control
[Generated music and feature plots]
Melody & Rhythm Control
[Reference audio, generated music, and feature plots]
Dynamics & Rhythm Control
[Generated music and feature plots]
Melody & Dynamics Control
[Reference audio, generated music, and feature plots]
Melody, Dynamics, & Rhythm Control
[Generated music and feature plots]
Partial Melody Control
[Reference audio, generated music, and feature plots]
Partial Dynamics Control
[Generated music and feature plots]
Partial Rhythm Control
[Generated music and feature plots]

Examples (Random)

Below are generated music clips and feature plots for individual controls (melody, dynamics, and rhythm), their combinations, and controls that are only partially specified in time. The examples here are randomly generated (not cherry-picked).

For each example, the first row of plots shows the controls extracted from the generated audio, and the second row shows the input controls. Gray shaded regions denote partially-specified controls; the control is not enforced within the gray region. A melody reference file is also provided for examples that include melody control.
Melody Control
[Reference audio, generated music, and feature plots]
Dynamics Control
[Generated music and feature plots]
Rhythm Control
[Generated music and feature plots]
Melody & Rhythm Control
[Reference audio, generated music, and feature plots]
Dynamics & Rhythm Control
[Generated music and feature plots]
Melody & Dynamics Control
[Reference audio, generated music, and feature plots]
Melody, Dynamics, & Rhythm Control
[Generated music and feature plots]
Partial Melody Control
[Reference audio, generated music, and feature plots]
Partial Dynamics Control
[Generated music and feature plots]
Partial Rhythm Control
[Generated music and feature plots]

Acknowledgements


Thank you to Ge Zhu, Juan-Pablo Caceres, Zhiyao Duan, and Nicholas J. Bryan for sharing their soon-to-be-published high-fidelity vocoder work, which is used in the demo video (a full citation will be added soon).