
Early and Accurate Detection of Tomato Leaf Diseases Using TomFormer (2312.16331v1)

Published 26 Dec 2023 in eess.IV, cs.AI, and cs.CV

Abstract: Tomato leaf diseases pose a significant challenge for tomato farmers, resulting in substantial reductions in crop productivity. The timely and precise identification of tomato leaf diseases is crucial for successfully implementing disease management strategies. This paper introduces a transformer-based model called TomFormer for tomato leaf disease detection. The paper's primary contributions are as follows: Firstly, we present a novel approach for detecting tomato leaf diseases by employing a fusion model that combines a visual transformer and a convolutional neural network. Secondly, we apply our proposed methodology to the Hello Stretch robot to achieve real-time diagnosis of tomato leaf diseases. Thirdly, we assess our method against models such as YOLOS, DETR, ViT, and Swin, demonstrating its ability to achieve state-of-the-art outcomes. For the experiments, we used three tomato leaf disease datasets, namely KUTomaDATA, PlantDoc, and PlantVillage, where KUTomaDATA was collected from a greenhouse in Abu Dhabi, UAE. Finally, we present a comprehensive analysis of our model's performance and thoroughly discuss the limitations inherent in our approach. TomFormer performed well on the KUTomaDATA, PlantDoc, and PlantVillage datasets, with mean average precision (mAP) scores of 87%, 81%, and 83%, respectively. The comparative results in terms of mAP demonstrate that our method exhibits robustness, accuracy, efficiency, and scalability. Furthermore, it can be readily adapted to new datasets. We are confident that our work holds the potential to significantly influence the tomato industry by effectively mitigating crop losses and enhancing crop yields.
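The mAP scores reported above average per-class average precision (AP) over the disease classes. As a rough illustration of the metric only (not the authors' evaluation code; the per-class detections below are made up), a minimal sketch:

```python
# Minimal sketch of average precision (AP) and mAP for object detection.
# Illustrative only: the detection scores and match labels are invented,
# not taken from the TomFormer experiments.

def average_precision(scores, matched, num_positives):
    """AP for one class: area under the precision-recall curve.

    scores: detection confidences; matched[i] is 1 if detection i matches
    a ground-truth box (e.g. IoU >= 0.5), else 0; num_positives is the
    number of ground-truth boxes for this class.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap = 0.0
    for i in order:
        if matched[i]:
            tp += 1
            precision = tp / (tp + fp)
            ap += precision / num_positives  # recall step times precision
        else:
            fp += 1
    return ap

def mean_average_precision(per_class):
    """mAP: mean of per-class AP values."""
    aps = [average_precision(s, m, n) for s, m, n in per_class]
    return sum(aps) / len(aps)

# Toy example with two hypothetical disease classes.
per_class = [
    ([0.9, 0.8, 0.3], [1, 0, 1], 2),  # class A: AP = (1/1 + 2/3) / 2
    ([0.7, 0.6], [1, 1], 2),          # class B: AP = 1.0
]
print(round(mean_average_precision(per_class), 3))  # prints 0.917
```

Benchmark suites differ in details (IoU thresholds, interpolation of the precision-recall curve), so published mAP figures are only comparable under the same evaluation protocol.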
