A Two-stage Personalized Virtual Try-on Framework with Shape Control and Texture Guidance (2312.15480v1)
Abstract: Diffusion models have a strong ability to generate realistic images in the wild. However, with text guidance alone they often generate inaccurate images, which makes it very challenging to apply text-guided generative models directly to virtual try-on scenarios. Using images as the guiding conditions of the diffusion model, this paper proposes a new personalized virtual try-on model (PE-VITON) that decouples clothing attributes across two stages: shape control and texture guidance. Specifically, the proposed model adaptively matches the clothing to human body parts through a Shape Control Module (SCM), mitigating misalignment between the clothing and the body. The semantic information of the input clothing is parsed by a Texture Guided Module (TGM), and the corresponding texture is generated under directional guidance. As a result, the model effectively addresses the shortcomings of traditional try-on methods: poor reproduction of clothing folds, degraded results under complex human poses, blurred clothing edges, and unclear texture styles. Moreover, the model automatically enhances the generated clothing folds and textures according to the human posture, improving the realism of the virtual try-on. Qualitative and quantitative experiments on high-resolution paired and unpaired datasets show that the proposed model outperforms state-of-the-art methods.
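The abstract does not give implementation details, so the following is only a minimal, hypothetical sketch of how a two-stage pipeline of this kind might be organized. All class and function names (ShapeControlModule, TextureGuidedModule, try_on) are invented for illustration; the paper's actual SCM and TGM are diffusion-based modules far more involved than these placeholders.

```python
import numpy as np

# Hypothetical sketch of a two-stage try-on pipeline as described in the
# abstract. All names are invented for illustration; the paper's actual
# SCM and TGM are diffusion-based and not specified at this level.

class ShapeControlModule:
    """Stage 1: adaptively align the clothing to the target body parts."""
    def align(self, person: np.ndarray, clothing: np.ndarray) -> np.ndarray:
        # Placeholder: a real SCM would predict an alignment/warp that
        # matches the garment to the person's pose and body shape.
        return np.where(clothing > 0, clothing, person)

class TextureGuidedModule:
    """Stage 2: parse garment semantics and synthesize texture with guidance."""
    def synthesize(self, aligned: np.ndarray, clothing: np.ndarray) -> np.ndarray:
        # Placeholder: a real TGM would run a conditional diffusion model
        # guided by the parsed clothing semantics (texture, folds, edges).
        return 0.5 * aligned + 0.5 * clothing

def try_on(person: np.ndarray, clothing: np.ndarray) -> np.ndarray:
    scm, tgm = ShapeControlModule(), TextureGuidedModule()
    aligned = scm.align(person, clothing)      # shape control stage
    return tgm.synthesize(aligned, clothing)   # texture guidance stage

if __name__ == "__main__":
    h, w = 256, 192
    person = np.random.rand(h, w, 3)
    clothing = np.random.rand(h, w, 3)
    result = try_on(person, clothing)
    print(result.shape)  # (256, 192, 3)
```

The only point the sketch is meant to convey is the decoupling described in the abstract: shape alignment is resolved first, and texture synthesis is conditioned on that aligned result rather than on the raw garment alone.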