
Journal of Chinese Agricultural Mechanization ›› 2025, Vol. 46 ›› Issue (4): 114-119.DOI: 10.13733/j.jcam.issn.2095-5553.2025.04.017

• Research on Agricultural Intelligence •

Research on bamboo shoot segmentation method based on YOLOv8

Zheng Zhenhui1, 2, 3, Zhang Danran4, Wang Shuo1, 2, 3, Wei Lijiao1, 2, 3, Wu Haiyun1, Xiao Jingfeng1   

  (1. Institute of Agricultural Machinery, Chinese Academy of Tropical Agricultural Sciences, Zhanjiang 524091, China; 2. Key Laboratory of Agricultural Equipment for Tropical Crops, Ministry of Agriculture and Rural Affairs, Zhanjiang 524091, China; 3. Guangdong Engineering Technology Research Center of Precision Emission Control for Agricultural Particulates, Zhanjiang 524091, China; 4. College of Engineering, South China Agricultural University, Guangzhou 510642, China)
  • Online: 2025-04-15; Published: 2025-04-18

  • Supported by: Hainan Provincial Natural Science Foundation (521QN310)

Abstract: Existing methods for accurate bamboo shoot segmentation have high computational complexity and large model sizes, which makes them difficult to deploy on edge devices with limited computing power. To address this problem, a lightweight bamboo shoot segmentation method based on a deep convolutional network is proposed. First, a field experiment was conducted to acquire bamboo shoot images, and training and test datasets were constructed through manual annotation. Then, the YOLOv8 deep convolutional network was adopted as the baseline model for bamboo shoot segmentation, and the model was trained and validated. Finally, two comparative experiments were designed to verify the accuracy and efficiency of the proposed method. The results show that the proposed method achieved a detection mAP of 0.927 and a segmentation mAP of 0.931 on the test dataset, improvements of 3.6% and 5.7% over the YOLOv7 network and of 0.1% and 0.4% over the YOLOv8_x network, respectively. In addition, the method reached a segmentation speed of 169 frames per second with 43.8 M parameters and a model size of 92.3 MB: 71 frames per second faster than the YOLOv7 network and 51 frames per second faster than the YOLOv8_x network, while using 24.6 M fewer parameters and 51.6 MB less storage than YOLOv8_x. To further validate the practical feasibility and efficiency of the method, the YOLOv8_l network was deployed on an NVIDIA edge device. After deployment, the bamboo shoot segmentation algorithm achieved a detection mAP of 0.896 and a segmentation mAP of 0.891 on the test dataset, with an inference time of only 0.0914 seconds per image and a memory consumption of 12.2 GB.
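The detection and segmentation mAP figures quoted above are built on per-mask intersection-over-union (IoU). As an illustration only (not the authors' code), the sketch below computes IoU for two binary masks represented as sets of pixel coordinates; the masks are hypothetical toy data.

```python
def mask_iou(pred, truth):
    """Intersection-over-union of two binary masks given as pixel-coordinate sets."""
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly
    inter = len(pred & truth)  # pixels present in both masks
    union = len(pred | truth)  # pixels present in either mask
    return inter / union

# Ground truth: a 3x3 block of shoot pixels; the prediction covers 6 of the 9.
truth = {(r, c) for r in range(3) for c in range(3)}
pred = {(r, c) for r in range(3) for c in range(3) if c < 2}

iou = mask_iou(pred, truth)  # 6 / 9 ≈ 0.667
```

In mAP evaluation, each predicted mask is matched to a ground-truth mask and counted as correct when its IoU exceeds a threshold (commonly 0.5), with precision then averaged over recall levels.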

Key words: bamboo shoots, deep convolutional network, segmentation, lightweight, YOLOv8
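For a quick sense of the throughput figures in the abstract, per-image latency and frames per second are reciprocals. The sketch below converts the two reported numbers (0.0914 s per image on the edge device; 169 frames per second in the comparison experiments) between the two units:

```python
# Edge-device inference: 0.0914 s per image, as reported in the abstract.
edge_latency_s = 0.0914
edge_fps = 1.0 / edge_latency_s        # ~10.9 frames/s on the edge device

# Comparison-experiment speed: 169 frames/s, as reported in the abstract.
bench_fps = 169
bench_latency_ms = 1000.0 / bench_fps  # ~5.9 ms per frame
```

The gap between the two figures reflects the difference between the edge device's limited compute and the hardware used for the comparison experiments.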

