Significance of Skeleton-based Features in Virtual Try-On (2208.08076v3)

Published 17 Aug 2022 in cs.CV

Abstract: The idea of Virtual Try-On (VTON) benefits e-retailing by giving a user the convenience of trying on clothing from the comfort of their home. Most existing VTON methods produce inconsistent results when a person posing with folded (i.e., bent or crossed) arms wants to try on an outfit. The problem becomes severe for long-sleeved outfits, where crossed-arm postures can cause different clothing parts to overlap. Existing approaches, especially warping-based methods employing the Thin Plate Spline (TPS) transform, cannot handle such cases. To this end, we propose an approach in which the clothing on the source person is segmented into semantically meaningful parts, and each part is warped independently to the shape of the target person. To address the bending issue, we employ hand-crafted geometric features consistent with human body geometry for warping the source outfit. In addition, we propose two learning-based modules: a synthesizer network and a mask prediction network. Together, these components aim to produce a photo-realistic, pose-robust VTON solution without requiring any paired training data. Comparisons with benchmark methods clearly establish the effectiveness of the approach.
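For context, the TPS transform the abstract refers to is a smooth warp interpolated from landmark correspondences; applied globally, it cannot model the part-wise overlaps that occur with bent or crossed arms, which is what motivates the paper's part-wise warping. The sketch below is not the paper's method: it is a minimal single-garment TPS warp using SciPy, with made-up landmark coordinates and a placeholder garment image, just to illustrate the kind of warping being discussed.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

# Hypothetical control points (x, y): landmarks on the target person and the
# corresponding landmarks on the source garment (e.g., shoulders, hips).
dst_pts = np.array([[12, 14], [88,  8], [15, 95], [85, 92], [52, 55]], dtype=float)
src_pts = np.array([[10, 10], [90, 10], [10, 90], [90, 90], [50, 50]], dtype=float)

# A TPS warp is radial-basis interpolation with the thin-plate kernel
# r^2 log r. Fitting dst -> src gives an inverse map: for every output
# pixel we ask where in the source garment image it should sample.
tps = RBFInterpolator(dst_pts, src_pts, kernel='thin_plate_spline')

garment = np.random.rand(100, 100)  # stand-in for a garment (or garment part)
h, w = garment.shape
ys, xs = np.mgrid[0:h, 0:w]
out_coords = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
src_xy = tps(out_coords)            # (h*w, 2) source sample locations

# map_coordinates expects (row, col) order, i.e., (y, x).
warped = map_coordinates(garment, [src_xy[:, 1], src_xy[:, 0]],
                         order=1, mode='nearest').reshape(h, w)
```

Because a single smooth map like this cannot fold one sleeve over another, the paper instead segments the outfit into parts and warps each part with its own transform guided by skeleton-consistent geometric features.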

Authors (4)
  1. Debapriya Roy (5 papers)
  2. Sanchayan Santra (8 papers)
  3. Diganta Mukherjee (21 papers)
  4. Bhabatosh Chanda (11 papers)
Citations (2)
