
The Impact of Background Removal on Performance of Neural Networks for Fashion Image Classification and Segmentation (2308.09764v2)

Published 18 Aug 2023 in cs.CV, cs.AI, and cs.LG

Abstract: Fashion understanding is an active topic in computer vision, with many applications of significant business value. It remains a difficult challenge due to the immense diversity of garments and the variety of scenes and backgrounds. In this work, we remove the background from fashion images to improve data quality and model performance. Because fashion images typically show clearly visible persons in fully visible garments, Salient Object Detection can achieve the desired background removal for fashion data. A fashion image with the background removed is referred to as a "rembg" image, in contrast to the original image in the fashion dataset. We conducted extensive comparative experiments with these two types of images across multiple aspects of model training, including model architectures, model initialization, compatibility with other training tricks and data augmentations, and target task types. Our experiments show that background removal works well for fashion data in simple, shallow networks that are not prone to overfitting: it improves classification accuracy by up to 5% on the FashionStyle14 dataset when models are trained from scratch. However, background removal does not perform well in deep neural networks because it is incompatible with regularization techniques such as batch normalization, pre-trained initialization, and data augmentations that introduce randomness. The loss of background pixels invalidates many existing training tricks and adds the risk of overfitting for deep models.
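To make the preprocessing step concrete, below is a minimal sketch of background removal using the open-source rembg tool named in the abstract, which wraps a U²-Net salient object detector. The file names are illustrative placeholders rather than the paper's actual data paths, and the snippet shows the general technique, not the authors' exact pipeline.

```python
# Minimal sketch: producing a "rembg" image from an original fashion photo.
# Assumes the open-source rembg package (pip install rembg) and Pillow.
from rembg import remove
from PIL import Image

original = Image.open("fashion_original.jpg")  # original dataset image (placeholder path)
cutout = remove(original)                      # salient-object cutout; background pixels become transparent
cutout.save("fashion_rembg.png")               # PNG preserves the alpha channel
```

In the paper's experiments, the original and rembg versions of each image feed otherwise identical training pipelines, so any difference in accuracy can be attributed to the background pixels.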
