
Edge-AI for Agriculture: Lightweight Vision Models for Disease Detection in Resource-Limited Settings

Published 23 Dec 2024 in cs.CV, cs.AI, and cs.CY | (2412.18635v1)

Abstract: This research paper presents the development of a lightweight and efficient computer vision pipeline aimed at assisting farmers in detecting orange diseases using minimal resources. The proposed system integrates advanced object detection, classification, and segmentation models, optimized for deployment on edge devices, ensuring functionality in resource-limited environments. The study evaluates the performance of various state-of-the-art models, focusing on their accuracy, computational efficiency, and generalization capabilities. Notable findings include the Vision Transformer achieving 96% accuracy in orange species classification and the lightweight YOLOv8-S model demonstrating exceptional object detection performance with minimal computational overhead. The research highlights the potential of modern deep learning architectures to address critical agricultural challenges, emphasizing the importance of model complexity versus practical utility. Future work will explore expanding datasets, model compression techniques, and federated learning to enhance the applicability of these systems in diverse agricultural contexts, ultimately contributing to more sustainable farming practices.

Summary

  • The paper proposes and evaluates lightweight computer vision models (classification, segmentation, detection) optimized for detecting orange diseases on resource-constrained edge devices.
  • Key findings highlight Vision Transformer's 96% accuracy for classification, LinkNet's 0.9039 IoU for segmentation, and YOLOv8-S's 0.949 mAP50 with high efficiency (6MB, 10.9ms/image) for object detection.
  • This research demonstrates the feasibility of deploying advanced AI models for agricultural disease detection in low-resource settings, even with limited training data.

The paper "Edge-AI for Agriculture: Lightweight Vision Models for Disease Detection in Resource-Limited Settings" addresses the challenges of deploying deep learning solutions in agricultural contexts, particularly in scenarios constrained by limited computational resources. This research proposes a pipeline integrating object detection, classification, and segmentation models designed for deployment on edge devices, aimed specifically at detecting diseases in oranges.

Technical Approach

The research outlines a comprehensive methodology encompassing model selection, training, and evaluation for various tasks:

  1. Dataset Preparation: The dataset includes images of five orange species—Tangerine, Navel, Blood Oranges, Bergamot, Tangelo—and diseases such as Citrus Canker and Greening. The images, captured under diverse conditions, are annotated for object detection, classification, and segmentation tasks.
  2. Model Selection and Training:
    • Classification: Various models such as MobileNet V3, Vision Transformer (ViT), DenseNet, and ResNet were considered. The Vision Transformer stood out with 96% accuracy.
    • Segmentation: Models like DeepLabV3, U-Net++, and LinkNet were employed, with LinkNet achieving the highest IoU of 0.9039.
    • Object Detection: The study utilized YOLOv8-S, RetinaNet, and DETR among others. YOLOv8-S demonstrated outstanding performance with an mAP50 of 0.949 and optimal inference speed.
  3. Optimization Techniques:
    • The models were fine-tuned using transfer learning with ImageNet pre-trained weights.
    • The training regimen incorporated techniques like Cross-Entropy Loss, SGD optimizer, ReduceLROnPlateau scheduler, and early stopping to improve model convergence and performance.
  4. Evaluation Metrics: Model efficacy was assessed using a variety of metrics, including accuracy, precision, recall, F1-score, IoU, and mean Average Precision (mAP), thereby providing a comprehensive evaluation of performance across tasks.
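The training-control portion of this regimen (plateau-based learning-rate reduction plus early stopping) can be sketched framework-agnostically. The code below is a hypothetical illustration, not the authors' implementation: it assumes caller-supplied `train_epoch` and `validate` callables and mimics the behavior of PyTorch's `ReduceLROnPlateau` scheduler combined with patience-based early stopping.

```python
def train_with_plateau_and_early_stop(
    train_epoch,            # callable: lr -> None, runs one training epoch
    validate,               # callable: () -> float, returns validation loss
    lr=0.01,                # initial SGD learning rate
    factor=0.1,             # LR multiplier on plateau (ReduceLROnPlateau-style)
    patience=3,             # epochs without improvement before reducing LR
    early_stop_patience=6,  # epochs without improvement before stopping
    max_epochs=100,
):
    """Minimal sketch of plateau LR scheduling + early stopping."""
    best_loss = float("inf")
    plateau_wait = stop_wait = 0
    for epoch in range(max_epochs):
        train_epoch(lr)
        val_loss = validate()
        if val_loss < best_loss:
            best_loss = val_loss
            plateau_wait = stop_wait = 0
        else:
            plateau_wait += 1
            stop_wait += 1
            if plateau_wait >= patience:
                lr *= factor          # reduce LR when validation loss plateaus
                plateau_wait = 0
            if stop_wait >= early_stop_patience:
                break                 # early stopping
    return best_loss, lr, epoch + 1
```

In a real pipeline, `train_epoch` would run SGD steps with Cross-Entropy Loss over the training set and `validate` would return the held-out loss; here both are left abstract so the control logic stands on its own.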

Results and Implications

  • ViT Model: The Vision Transformer delivered the strongest classification performance, achieving 96% accuracy with robust multi-class discrimination, though its large model size carries substantial computational overhead.
  • Efficiency of Lightweight Models: MobileNet V3 performed strongly at only 24MB, and YOLOv8-S's 6MB footprint combined with fast inference (10.9ms/image) makes it particularly well suited to edge deployment in resource-constrained scenarios.
  • Segmentation: Despite its comparatively large size, LinkNet delivered the best segmentation results. This trade-off between model size and accuracy underscores the importance of selecting models according to deployment constraints.
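The IoU figures cited for segmentation (e.g., LinkNet's 0.9039) measure the overlap between predicted and ground-truth disease masks. A minimal pure-Python sketch of the computation, with illustrative masks rather than the paper's data:

```python
def mask_iou(pred, target):
    """Intersection-over-Union for two binary masks given as flat 0/1 lists."""
    intersection = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return intersection / union if union else 1.0  # both masks empty: IoU = 1

# Illustrative 4x4 masks flattened to lists (1 = diseased region)
pred   = [0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 1, 0,  0, 0, 0, 0]
target = [0, 1, 1, 0,  0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0]
score = mask_iou(pred, target)  # 5 overlapping pixels / 6 total = 5/6
```

The same intersection/union ratio, applied to bounding boxes instead of masks, underlies the mAP50 figure reported for YOLOv8-S (detections counting as correct above an IoU threshold of 0.5).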

Discussion and Future Work

The research underscores the potential of deploying advanced deep learning architectures in real-world agriculture despite the challenges of limited data and computational resources. The finding that models can perform effectively with as few as 50 training images supports their use in environments lacking extensive datasets.

Future directions include:

  • Utilizing larger and more diverse datasets to enhance model robustness.
  • Exploring federated learning to harness distributed datasets while maintaining data privacy.
  • Investigating model compression techniques to make high-performance models feasible for broader applications.
  • Incorporating temporal data, which could further extend the functionality of these systems in agricultural monitoring.
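The federated learning direction can be illustrated with the core FedAvg aggregation step: each device trains locally and only model weights are averaged centrally, so raw field images never leave the farm. A hypothetical sketch with weights represented as plain lists (not the paper's implementation):

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: dataset-size-weighted average of per-client model weights.

    client_weights: one flat list of parameters per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two hypothetical farms with different amounts of local data:
# the farm with 300 samples pulls the average toward its weights.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[100, 300])
```

In practice the averaged weights would be broadcast back to clients for further local training rounds; weighting by dataset size keeps the global model from being dominated by data-poor clients.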

Overall, this research highlights significant advancements in computer vision applications for agriculture, offering practical implementations for efficiently detecting and managing crop diseases in low-resource settings.

