Capacity Constraint Analysis Using Object Detection for Smart Manufacturing (2402.00243v1)
Abstract: The growing popularity of Deep Learning (DL) based Object Detection (OD) methods and their real-world applications have opened new avenues in smart manufacturing. Traditional industries hit by capacity constraints in the wake of the Coronavirus Disease (COVID-19) pandemic require non-invasive methods for in-depth analysis of their operations in order to optimize them and increase revenue. In this study, we first develop a Convolutional Neural Network (CNN) based OD model to tackle this issue. The model is trained to accurately identify the presence of chairs and individuals on the production floor. The identified objects are then passed to a CNN-based tracker, which follows them throughout their life cycle at the workstation. The extracted metadata is further processed through a novel framework for capacity constraint analysis. We found that Station C was only 70.6% productive over a six-month period. Additionally, the time spent at each station is recorded and aggregated for each object. This data proves helpful in conducting annual audits and effectively managing labor and material over time.
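To make the downstream analysis step concrete, the following is a minimal Python sketch of how per-frame detector/tracker output could be aggregated into station dwell times and a utilization figure. It assumes the detection and tracking stages already produce (frame, track ID, class, station) records; the `Observation` type, `station_utilization` helper, frame rate, and shift length are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict
from dataclasses import dataclass

# Assumed constants for illustration only.
FPS = 30                        # camera frame rate
SHIFT_FRAMES = 8 * 3600 * FPS   # an 8-hour shift, expressed in frames


@dataclass
class Observation:
    frame: int      # video frame index
    track_id: int   # identity assigned by the tracker
    label: str      # e.g. "person" or "chair"
    station: str    # workstation the bounding box falls in


def dwell_frames(observations):
    """Count how many frames each tracked object spends at each station."""
    counts = defaultdict(int)   # (station, track_id) -> frame count
    for obs in observations:
        counts[(obs.station, obs.track_id)] += 1
    return counts


def station_utilization(observations, label="person"):
    """Fraction of the shift during which at least one object of `label`
    is present at each station (a rough productivity proxy)."""
    occupied = defaultdict(set)  # station -> set of occupied frame indices
    for obs in observations:
        if obs.label == label:
            occupied[obs.station].add(obs.frame)
    return {s: len(frames) / SHIFT_FRAMES for s, frames in occupied.items()}


if __name__ == "__main__":
    # Tiny synthetic example: one worker observed at Station C for three frames.
    obs = [
        Observation(frame=0, track_id=1, label="person", station="C"),
        Observation(frame=1, track_id=1, label="person", station="C"),
        Observation(frame=2, track_id=1, label="person", station="C"),
    ]
    print(dwell_frames(obs))          # {('C', 1): 3}
    print(station_utilization(obs))   # {'C': tiny fraction of the shift}
```

Dividing occupied frames by total shift frames is one plausible way to arrive at a percentage-productive figure such as the 70.6% reported for Station C; the paper's actual framework may weight or filter the metadata differently.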