Do You Guys Want to Dance: Zero-Shot Compositional Human Dance Generation with Multiple Persons
Abstract: Human dance generation (HDG) aims to synthesize realistic videos from reference images and sequences of driving poses. Despite great success, existing methods are limited to generating videos of a single person with a specific background, and their generalizability to real-world scenarios with multiple persons and complex backgrounds remains unclear. To systematically measure the generalizability of HDG models, we introduce a new task, dataset, and evaluation protocol for compositional human dance generation (cHDG). Evaluating state-of-the-art methods on cHDG, we empirically find that they fail to generalize to such real-world scenarios. To tackle this issue, we propose a novel zero-shot framework, dubbed MultiDance-Zero, that synthesizes videos consistent with arbitrary multiple persons and backgrounds while precisely following the driving poses. Specifically, in contrast to straightforward DDIM or null-text inversion, we first present a pose-aware inversion method that obtains a noisy latent code and initialization text embeddings which accurately reconstruct the composed reference image. Since directly generating videos from them leads to severe appearance inconsistency, we propose a compositional augmentation strategy that generates augmented images and uses them to optimize a set of generalizable text embeddings. In addition, we elaborate a consistency-guided sampling scheme that encourages the background and keypoints of the estimated clean image at each reverse step to stay close to those of the reference image, further improving the temporal consistency of the generated videos. Extensive qualitative and quantitative results demonstrate the effectiveness and superiority of our approach.
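The pose-aware inversion described above builds on DDIM inversion, which runs the deterministic DDIM update in reverse to recover a noisy latent that reconstructs the input. As a minimal illustrative sketch (not the paper's method), the round trip can be shown on a toy, x-independent noise predictor, for which DDIM inversion followed by DDIM sampling is exact; the `eps_model` here is a stand-in assumption, not a real diffusion network.

```python
import numpy as np

# Toy DDIM inversion / sampling round trip.
# Assumption for illustration: the noise predictor is independent of x,
# which makes the inversion exact (a real U-Net predictor is only
# approximately invertible this way).

T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # \bar{alpha}_t

rng = np.random.default_rng(0)
fixed_eps = rng.normal(size=4)  # stand-in for eps_theta(x_t, t)

def eps_model(x, t):
    return fixed_eps

def ddim_invert(x0):
    """Map a clean latent x0 forward to the noisy latent x_T."""
    x = x0.copy()
    for t in range(T):
        a_prev = alphas_bar[t - 1] if t > 0 else 1.0  # current noise level
        a_t = alphas_bar[t]                           # next (noisier) level
        eps = eps_model(x, t)
        x0_pred = (x - np.sqrt(1.0 - a_prev) * eps) / np.sqrt(a_prev)
        x = np.sqrt(a_t) * x0_pred + np.sqrt(1.0 - a_t) * eps
    return x

def ddim_sample(xT):
    """Run deterministic DDIM sampling back from x_T to a clean latent."""
    x = xT.copy()
    for t in reversed(range(T)):
        a_t = alphas_bar[t]
        a_prev = alphas_bar[t - 1] if t > 0 else 1.0
        eps = eps_model(x, t)
        # Estimated clean latent at this reverse step (the quantity the
        # paper's consistency-guided sampling constrains).
        x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps
    return x

x0 = rng.normal(size=4)
xT = ddim_invert(x0)
x0_rec = ddim_sample(xT)  # reconstructs x0 up to float error
```

The intermediate `x0_pred` is the "estimated clean image at each reverse step" referenced in the abstract; consistency-guided sampling nudges its background and keypoints toward the reference image before continuing the reverse process.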