
Developing an AI-based Integrated System for Bee Health Evaluation

Published 18 Jan 2024 in cs.LG, cs.CV, cs.SD, and eess.AS | (2401.09988v1)

Abstract: Honey bees pollinate about one-third of the world's food supply, but bee colonies have alarmingly declined by nearly 40% over the past decade due to several factors, including pesticides and pests. Traditional methods for monitoring beehives, such as human inspection, are subjective, disruptive, and time-consuming. To overcome these limitations, artificial intelligence has been used to assess beehive health. However, previous studies have lacked an end-to-end solution and primarily relied on data from a single source, either bee images or sounds. This study introduces a comprehensive system consisting of bee object detection and health evaluation. Additionally, it utilized a combination of visual and audio signals to analyze bee behaviors. An Attention-based Multimodal Neural Network (AMNN) was developed to adaptively focus on key features from each type of signal for accurate bee health assessment. The AMNN achieved an overall accuracy of 92.61%, surpassing eight existing single-signal Convolutional Neural Networks and Recurrent Neural Networks. It outperformed the best image-based model by 32.51% and the top sound-based model by 13.98% while maintaining efficient processing times. Furthermore, it improved prediction robustness, attaining an F1-score higher than 90% across all four evaluated health conditions. The study also shows that audio signals are more reliable than images for assessing bee health. By seamlessly integrating AMNN with image and sound data in a comprehensive bee health monitoring system, this approach provides a more efficient and non-invasive solution for the early detection of bee diseases and the preservation of bee colonies.


Summary

  • The paper presents an integrated system using an Attention-based Multimodal Neural Network (AMNN) that fuses visual data from YOLOv5 and audio features for robust bee health evaluation.
  • It employs detailed data acquisition from 25 beehives, utilizing image and audio preprocessing alongside feature extraction methods like MFCC and Chromagram.
  • The study demonstrates the method’s practical value by achieving 92.61% accuracy, thereby enhancing early detection of pesticide effects and supporting sustainable beekeeping practices.

Introduction

The paper "Developing an AI-based Integrated System for Bee Health Evaluation" (2401.09988) introduces a comprehensive AI-driven approach to monitoring honey bee colony health. Honey bees are critical to global agriculture, yet their populations have declined by nearly 40% over the last decade due to factors such as pesticides and pests. Traditional beekeeping methods are labor-intensive, subjective, and can disrupt the bees. The research leverages an Attention-based Multimodal Neural Network (AMNN) that combines computer vision and signal processing to assess bee health from both visual and audio data, achieving 92.61% accuracy.

Methods

Data Acquisition and Preprocessing

The dataset comprises images and audio recorded from 25 beehives at various locations in California, amounting to over 150 gigabytes of video data capturing bee activity. A Raspberry Pi camera and microphone collect the images and sound, providing the high-resolution recordings needed for detailed analysis.

Bee Object Detection

The bee images undergo preprocessing and are annotated using YOLO format for image detection models. Audio clips are similarly annotated, focusing on distinguishing bee sounds from background noise. Data augmentation techniques like blurring and transformation play a crucial role in improving model robustness.
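As a concrete illustration, blur-style augmentation can be sketched with plain array operations. The flip probability and the 3x3 box-blur kernel below are illustrative choices, not the authors' exact pipeline:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Illustrative augmentations: random horizontal flip plus a 3x3 box blur."""
    out = image.astype(float)
    if rng.random() < 0.5:
        out = out[:, ::-1]  # horizontal flip
    # 3x3 box blur: pad the edges, then average the nine shifted copies
    padded = np.pad(out, 1, mode="edge")
    out = sum(padded[i:i + out.shape[0], j:j + out.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))          # stand-in for a cropped bee image
aug = augment(img, rng)
print(aug.shape)  # (8, 8)
```

Because the blur is a convex average of neighboring pixels, the augmented image stays in the original value range, which keeps downstream normalization simple.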

Figure 1 illustrates the distinct audio feature patterns when bees are absent versus present, providing significant insights into behavior identification.

Figure 1: The audio features show different patterns when bees are absent (left) vs. present (right). The top-to-bottom representation includes the original audio wave, Mel Spectrogram, MFCC, STFT, and Chromagram.

Feature Extraction

The study emphasizes audio feature extraction using the Mel Spectrogram, MFCC, STFT, and Chromagram. These features capture intricate details of bee sounds, offering valuable cues for behavioral analysis, and the transformations standardize the audio representation so features can be recognized consistently across recordings.
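All four features derive from short-time spectral analysis. Below is a minimal numpy sketch of the underlying STFT magnitude spectrogram; the frame length and hop size are illustrative, and libraries such as librosa build MFCCs and Chromagrams on top of this representation:

```python
import numpy as np

def stft_magnitude(signal: np.ndarray, frame_len: int = 256, hop: int = 128) -> np.ndarray:
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequency bins (frame_len // 2 + 1 of them)
    return np.abs(np.fft.rfft(frames, axis=1))

# A 440 Hz tone sampled at 8 kHz as a stand-in for one second of hive audio
sr = 8000
t = np.arange(sr) / sr
spec = stft_magnitude(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (n_frames, frequency_bins)
```

The tone's energy concentrates near bin 440 * 256 / 8000 ≈ 14, which is the kind of pitch-localized structure the Chromagram summarizes.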

Model Development

The development incorporates multiple deep learning models for health assessment:

  1. YOLOv5 Model: A robust object detection tool applied to bee images, optimized for high precision in identifying and cropping bee-centric regions (Figure 2).

    Figure 2: Summary of YOLOv5 model performance

  2. Audio Object Detection Models: Four distinct 1D CNN models employ different audio features. The Chromagram model achieves the highest accuracy, emphasizing pitch dynamics as crucial for distinguishing bee sounds.
  3. Image Health Assessment Models: Implemented models include Inception v3, MobileNet v2, a self-designed CNN, and VGG16, each effective at classifying health from bee images. VGG16 offers superior performance due to its feature map depth (Figure 3).

    Figure 3: Visualization of VGG16's feature maps, highlighting key regions of an input image across the convolutional layers.

    Figure 4: Compressed self-designed CNN structures for (A) audio object detection, (B) visual health assessment, and (C) audio health assessment.

  4. Attention-based Multimodal Neural Network (AMNN): This model merges visual and audio data via attention mechanisms, improving robustness and accuracy in health assessment. Detailed pseudocode and architecture are provided, showcasing dynamic feature weighting and integration for optimal evaluation.
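The paper itself provides the AMNN's exact architecture and pseudocode. As a rough sketch of the fusion idea only, modality-level attention can be expressed as a softmax over learned relevance scores; the feature dimension and random weights below are purely hypothetical:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse(visual_feat: np.ndarray, audio_feat: np.ndarray,
         w_v: np.ndarray, w_a: np.ndarray) -> np.ndarray:
    """Score each modality, softmax the scores, return the weighted feature sum."""
    scores = np.array([visual_feat @ w_v, audio_feat @ w_a])
    alpha = softmax(scores)  # attention weights over the two modalities
    return alpha[0] * visual_feat + alpha[1] * audio_feat

rng = np.random.default_rng(1)
d = 16                                   # hypothetical feature dimension
v, a = rng.random(d), rng.random(d)      # stand-ins for learned embeddings
fused = fuse(v, a, rng.random(d), rng.random(d))
print(fused.shape)  # (16,)
```

Because the weights sum to 1, the fused vector is an elementwise convex combination of the two modality embeddings, which is what lets the network lean on audio when images are uninformative and vice versa.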

Results

Performance Analysis

The AMNN model significantly improves bee health assessment accuracy, surpassing isolated visual and audio models with an overall accuracy of 92.61%. The fused approach also achieves F1-scores above 90% across all four evaluated health conditions, improving predictive consistency under environmental stressors such as pesticide exposure.
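Per-condition F1-scores of this kind can be computed from per-class confusion counts. The toy labels below are invented for illustration and are not the paper's data:

```python
import numpy as np

def f1_per_class(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int) -> list:
    """F1 = 2PR / (P + R), from per-class true positives, false positives, false negatives."""
    scores = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * p * r / (p + r) if p + r else 0.0)
    return scores

# Toy labels for four hypothetical health conditions (0-3)
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 0, 1, 2, 2, 2, 3, 1])
print(f1_per_class(y_true, y_pred, 4))  # [1.0, 0.5, 0.8, 0.666...]
```

Reporting the minimum per-class F1 rather than only overall accuracy is what supports the robustness claim across all four conditions.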

Training and Inference Times

The study reports only a marginal increase in processing time for the AMNN, a minor cost justified by its gains in predictive accuracy. Training and inference times are documented for all models, showing that the multimodal approach remains computationally efficient.

Discussion

The paper highlights the value of integrating visual and audio data: subtle acoustic cues significantly enhance behavioral assessment, particularly for detecting pesticide exposure. Bee sounds provide more consistent health indicators because bees rely heavily on acoustic communication. Future studies could extend the approach across species and environments to strengthen bee health monitoring frameworks.

Conclusion

The research provides a novel system integrating AI with practical beekeeping strategies, contributing substantially to early detection of bee health issues. This comprehensive methodology offers beekeepers precise real-time monitoring tools for identifying emergent threats, thereby fostering sustainable conservation practices. The paper underlines the importance of advancing multimodal models in ecological applications.
