Enhanced Droplet Analysis Using Generative Adversarial Networks (2402.15909v3)
Abstract: Precision devices play an important role in enhancing production quality and productivity in agricultural systems, making their optimization essential in precision agriculture. Recently, with advances in deep learning, several studies have aimed to harness its capabilities to improve spray-system performance. However, the effectiveness of these methods depends heavily on the size of the training dataset, which is expensive and time-consuming to collect. To address the challenge of insufficient training samples, we developed an image generator named DropletGAN to synthesize droplet images. DropletGAN is trained on a small dataset captured by a high-speed camera and generates images at progressively increasing resolution. The results demonstrate that the model can produce high-quality images at a resolution of 1024×1024. The generated images are evaluated using the Fréchet inception distance (FID), achieving an FID score of 11.29. Furthermore, this research leverages recent advances in computer vision and deep learning to develop a lightweight droplet detector using the synthetic dataset. As a result, the detection model achieves a 16.06% increase in mean average precision (mAP) when the synthetic dataset is included in training. To the best of our knowledge, this work is the first to employ a generative model for augmenting droplet detection. Its significance lies not only in optimizing nozzle design for building efficient spray systems but also in addressing the common challenge of insufficient data across precision agriculture tasks. This work offers a critical contribution to conserving resources while striving for optimal and sustainable agricultural practices.
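The FID score reported above compares the statistics of real and generated images in a deep feature space. As a minimal sketch (not the paper's implementation), the distance itself is the Fréchet distance between two Gaussians fitted to feature vectors; in full FID those features come from an Inception-v3 pooling layer, while the function below works on any `(n_samples, n_features)` arrays:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets.

    FID = ||mu_a - mu_b||^2 + Tr(Sigma_a + Sigma_b - 2 (Sigma_a Sigma_b)^{1/2})

    For true FID, feats_a/feats_b would be Inception-v3 pool features of
    real and generated images; here they are plain NumPy arrays of shape
    (n_samples, n_features).
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    sigma_a = np.cov(feats_a, rowvar=False)
    sigma_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(sigma_a @ sigma_b)
    if np.iscomplexobj(covmean):
        # Numerical error can introduce a tiny imaginary component.
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(sigma_a + sigma_b - 2.0 * covmean))
```

Lower is better: identical feature distributions give a distance near zero, so an FID of 11.29 indicates the generated droplet images lie close to the real ones in feature space.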