Taming Diffusion Models for Music-driven Conducting Motion Generation. (arXiv:2306.10065v1 [eess.AS])

Generating the motion of orchestral conductors from a given piece of symphony
music is a challenging task since it requires a model to learn semantic music
features and capture the underlying distribution of real conducting motion.
Prior works have applied Generative Adversarial Networks (GANs) to this task,
but the promising diffusion model, which has recently shown advantages in
terms of both training stability and output quality, has not been exploited in
this context. This paper presents Diffusion-Conductor, a novel DDIM-based
approach for music-driven conducting motion generation, which integrates the
diffusion model into a two-stage learning framework. We further propose a random
masking strategy to improve feature robustness, and use a pair of geometric
loss functions to impose additional regularization and increase motion
diversity. We also design several novel metrics, including Fréchet Gesture
Distance (FGD) and Beat Consistency Score (BC), for a more comprehensive
evaluation of the generated motion. Experimental results demonstrate the
advantages of our model.
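
For reference, Fréchet Gesture Distance follows the same Fréchet-distance formulation used for FID: fit Gaussians to feature embeddings of real and generated motion and compare their means and covariances. The snippet below is a minimal sketch of that computation, assuming NumPy/SciPy and pre-extracted feature matrices; the function and variable names (frechet_gesture_distance, real_feats, gen_feats) are illustrative and not taken from the paper's implementation.

```python
import numpy as np
from scipy import linalg

def frechet_gesture_distance(real_feats, gen_feats):
    """Fréchet distance between Gaussians fitted to two sets of motion
    feature embeddings, each of shape [num_samples, feat_dim].
    (Hypothetical helper; not the authors' code.)"""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)

    diff = mu_r - mu_g
    # Matrix square root of the product of the two covariances.
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerics

    return diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean)
```

A lower FGD indicates that the distribution of generated motion features is closer to that of real conducting motion; the reliability of the score depends on having enough samples to estimate the covariances stably.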
