Effective Transfer of Pretrained Large Visual Model for Fabric Defect Segmentation via Specific Knowledge Injection (2306.16186v1)

Published 28 Jun 2023 in cs.CV and cs.AI

Abstract: Fabric defect segmentation is integral to textile quality control. However, the scarcity of high-quality annotated data and the diversity of fabric defects present significant challenges to the application of deep learning in this field. These factors limit the generalization and segmentation performance of existing models, impeding their ability to handle the complexity of diverse fabric types and defects. To overcome these obstacles, this study introduces a method that infuses specialized knowledge of fabric defects into the Segment Anything Model (SAM), a large-scale visual model. By introducing and training a dedicated set of fabric defect-related parameters, the approach integrates domain-specific knowledge into SAM without extensive modification of the pre-existing model parameters. The adapted SAM model retains the generalized image understanding learned from large-scale natural image datasets while incorporating fabric defect-specific knowledge, making it well suited to fabric defect segmentation. The experimental results show a significant improvement in segmentation performance, attributable to this combination of generic and fabric-specific knowledge. Benchmarked against popular existing segmentation models on three datasets, the proposed model performs substantially better; its results in cross-dataset comparisons and few-shot learning experiments further demonstrate its potential for practical application in textile quality control.
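The parameter-injection scheme the abstract describes is essentially adapter-style fine-tuning: SAM's pretrained weights stay frozen while a small set of newly added, fabric-specific parameters is trained. The sketch below illustrates that pattern in PyTorch. It is a minimal illustration under stated assumptions, not the paper's implementation: `Adapter`, `AdaptedBlock`, and `inject_adapters` are hypothetical names, and the only assumption about the encoder is that it exposes its transformer blocks as a `blocks` ModuleList (as SAM's `ImageEncoderViT` does).

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck MLP holding the new, domain-specific parameters."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        # Zero-init the up-projection so the residual path starts as the
        # identity and training begins from the unmodified pretrained model.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class AdaptedBlock(nn.Module):
    """A frozen pretrained block followed by a trainable adapter."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        self.adapter = Adapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))

def inject_adapters(encoder: nn.Module, dim: int) -> nn.Module:
    """Freeze all pretrained weights, then wrap each block with an adapter."""
    for p in encoder.parameters():
        p.requires_grad = False  # pretrained knowledge is left untouched
    encoder.blocks = nn.ModuleList(AdaptedBlock(b, dim) for b in encoder.blocks)
    return encoder

# Toy stand-in for the pretrained encoder; with the real SAM you would
# pass sam.image_encoder instead.
class ToyEncoder(nn.Module):
    def __init__(self, dim: int = 256, depth: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for b in self.blocks:
            x = b(x)
        return x

encoder = inject_adapters(ToyEncoder(), dim=256)
trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
total = sum(p.numel() for p in encoder.parameters())
print(f"trainable: {trainable} / {total}")  # only the adapters are trainable
```

Under this scheme only the adapter parameters receive gradients, so the encoder keeps the generic image understanding learned from natural images while the adapters absorb the fabric-defect-specific knowledge; the zero-initialized up-projection guarantees the adapted model behaves exactly like the pretrained one before any training step.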
