Machine Learning for Detection and Severity Estimation of Sweetpotato Weevil Damage in Field and Lab Conditions

Notice: This research summary and analysis were generated automatically using AI technology. For full accuracy, please refer to the original arXiv source.

Sweetpotato weevils (Cylas spp.) are among the most destructive pests affecting sweetpotato production, particularly in sub-Saharan Africa. Traditional methods for assessing weevil damage rely predominantly on manual scoring and are labour-intensive, subjective, and often inconsistent. These challenges significantly hinder breeding programs aimed at developing resilient sweetpotato varieties. This study introduces a computer vision-based approach for the automated evaluation of weevil damage in both field and laboratory contexts. In field settings, we collected data to train classification models that predict root-damage severity levels, achieving a test accuracy of 71.43%. We also established a laboratory dataset and designed an object detection pipeline employing YOLOv12, a leading real-time detection model. This two-stage laboratory pipeline combines root segmentation with a tiling strategy to improve the detectability of small objects; the resulting model achieved a mean average precision of 77.7% in identifying minute weevil feeding holes. Our findings indicate that computer vision technologies can provide efficient, objective, and scalable assessment tools that align with contemporary breeding workflows. These advances can substantially improve phenotyping efficiency in sweetpotato breeding programs and help mitigate the detrimental effects of weevils on food security.


💡 Research Summary

The paper addresses a critical bottleneck in sweetpotato breeding programs in sub‑Saharan Africa: the labor‑intensive, subjective, and often inconsistent manual scoring of weevil (Cylas spp.) damage. To overcome this, the authors develop two complementary computer‑vision pipelines—one for field‑level severity classification and another for laboratory‑level detection of individual feeding holes.

Field data were collected from four Ugandan sites across two growing seasons, yielding 356 video recordings and over 450 still images of harvested roots. Experts assigned damage scores using a modified 1‑9 scale reduced to five classes (1, 3, 5, 7, 9). Because of severe class imbalance, frames were extracted from videos to augment under‑represented classes, resulting in a final dataset of 788 images representing 636 plots. Several convolutional neural networks (e.g., ResNet‑50, EfficientNet‑B3) were fine‑tuned with extensive data augmentation. The best model achieved 71.43 % test accuracy on the five‑class problem. Error analysis showed relatively reliable discrimination between low‑damage (1, 3) and high‑damage (7, 9) plots, while intermediate classes were more often confused, reflecting the subtle visual differences and variability in lighting and root orientation.
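The frame-extraction step used to counter class imbalance can be sketched as a simple balancing calculation: given per-class image counts, determine how many additional video frames to sample for each under-represented severity class. The specific counts below are illustrative placeholders, not figures from the paper.

```python
def frames_to_extract(class_counts, target=None):
    """Compute how many extra video frames to sample per severity class
    so that every class reaches the size of the largest class (or a
    caller-supplied target). Purely illustrative of the balancing idea."""
    if target is None:
        target = max(class_counts.values())
    return {cls: max(0, target - n) for cls, n in class_counts.items()}

# Hypothetical per-class counts for the five severity scores (1, 3, 5, 7, 9)
counts = {1: 210, 3: 95, 5: 60, 7: 85, 9: 170}
extra = frames_to_extract(counts)
```

In practice the sampled frames would also pass a quality filter (blur, duplicate-frame checks) before joining the training set; the sketch only covers the counting step.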

For the laboratory component, the authors built a two‑stage pipeline. First, a U‑Net‑style segmentation model isolates the root region, reducing background clutter. Second, the segmented images are sliced into 512 × 512 px tiles and fed to a YOLOv12 detector. The tiling strategy, implemented via the SAHI (Slicing Aided Hyper Inference) framework, dramatically improves recall for tiny feeding holes that would otherwise be missed in full‑resolution inference. The detection model achieved a mean average precision (mAP) of 77.7 % for the small‑object task. To ensure deployment on resource‑constrained devices, the model was pruned to under 10 M parameters and converted to TensorRT, enabling near‑real‑time inference on modern smartphones (≈30 fps).
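The tiling idea can be illustrated with a minimal slicer that computes overlapping 512 × 512 tile coordinates over an image, similar in spirit to SAHI's slicing step. The 20% overlap ratio here is an assumption for illustration; SAHI itself also handles merging predictions back into full-image coordinates, which this sketch omits.

```python
def slice_coords(width, height, tile=512, overlap=0.2):
    """Return (x0, y0, x1, y1) boxes covering an image with the given
    overlap ratio. Edge tiles are shifted inward so every tile is
    full-size when the image is at least one tile wide/tall."""
    step = max(1, int(tile * (1 - overlap)))
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Ensure the right and bottom edges are covered
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y, min(x + tile, width), min(y + tile, height))
            for y in ys for x in xs]

tiles = slice_coords(1024, 512)
```

Each tile is then run through the detector at native resolution, so a feeding hole a few pixels wide occupies a much larger fraction of the network's input than it would in a downscaled full image.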

The authors compare the automated pipelines with traditional expert scoring, estimating an 80 % reduction in labor and time while maintaining comparable accuracy. They also discuss limitations: reliance on expert‑generated labels introduces potential bias; field images suffer from variable illumination and occlusion; and the current classification approach treats severity as discrete rather than continuous, limiting fine‑grained phenotyping.
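Agreement between detections and expert annotations is conventionally measured by matching predicted boxes to ground-truth feeding holes via intersection-over-union (IoU). A minimal sketch of greedy IoU matching and the resulting precision/recall follows; the 0.5 IoU threshold is the common default, not a figure from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedily match each prediction to an unmatched ground-truth box
    at IoU >= thr, then report precision and recall."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    prec = tp / len(preds) if preds else 0.0
    rec = tp / len(gts) if gts else 0.0
    return prec, rec
```

Averaging precision over recall levels and confidence thresholds per class yields the mAP metric reported for the detector; evaluation toolkits such as pycocotools perform that aggregation.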

Future work is outlined, including incorporation of multispectral or thermal imagery to capture early‑stage damage, domain‑adaptation techniques to improve model robustness across locations and varieties, and regression‑based models for continuous severity estimation. The authors envision integrating the lightweight models into a mobile application that provides instant feedback to breeders and extension workers, with cloud‑based data aggregation to support large‑scale breeding decisions.

In conclusion, the study demonstrates that modern deep‑learning methods—when carefully adapted to the constraints of agricultural phenotyping—can deliver objective, scalable, and field‑deployable tools for assessing weevil damage. This represents a significant step toward accelerating sweetpotato breeding for weevil resistance and, ultimately, enhancing food security in vulnerable regions.

