
Journal of Chinese Agricultural Mechanization

Journal of Chinese Agricultural Mechanization ›› 2023, Vol. 44 ›› Issue (10): 217-223. DOI: 10.13733/j.jcam.issn.2095-5553.2023.10.031

• Agricultural Intelligence Research •



  1. College of Mechanical and Electrical Engineering, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China; 2. Guangdong Provincial Innovation Team for Common Key Technology R&D of Forest Fruit Robots, Guangzhou 510225, China; 3. Zhanjiang Cigarette Factory, China Tobacco Guangdong Industrial Co., Ltd., Zhanjiang, Guangdong 524000, China; 4. Meizhou Jinlv Modern Agriculture Development Co., Ltd., Meizhou, Guangdong 514000, China; 5. College of Automation, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China
  • Online: 2023-10-15 Published: 2023-11-09
  • Supported by:
    Guangzhou Key R&D Science and Technology Program Project (2023B03J0862)

Path recognition method of robot visual navigation for meat pigeon breeding farm based on U-Net

Zhu Lixue1, 2, Mo Dongyan3, Guan Jinxuan1, Zhang Shi'ang2, Wang Ziying4, Huang Weifeng5




Abstract: To address the difficulties that visual navigation faces in indoor breeding farm environments, where the main road region is hard to distinguish from the background and pigeon feathers and feces cause interference, a U-Net network-based visual navigation path recognition method for meat pigeon breeding farms is proposed. A road dataset is created with annotation software, and a suitable fully convolutional neural network is trained on it to generate a semantic segmentation model. Edge-point pixel coordinates are obtained from the generated segmentation mask region, fitting navigation points are derived by an equal-proportion navigation method, and a navigation line is generated by fitting the navigation points according to the least-squares line-fitting principle. The experimental results show that the test-set accuracy, intersection over union (IoU), and recall of the optimal semantic segmentation model are 98.48%, 96.21%, and 99.05%, respectively. For a road width of 1 m, under different lighting and road conditions, the mean lateral standard deviation and mean lateral absolute deviation of autonomous navigation are 0.019 5 m and 0.027 m, and the mean heading standard deviation and mean heading absolute deviation are 1.657° and 1.95°. In the field environment of a pigeon house, this method achieves effective segmentation of the road in images, providing a technical solution for visual navigation path recognition of breeding farm robots.
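The post-segmentation steps described above (edge-point extraction from the mask, equal-proportion navigation points, least-squares line fitting) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the `ratio` parameter, and the synthetic mask are assumptions, and a ratio of 0.5 is taken to mean the road midline.

```python
import numpy as np

def fit_navigation_line(mask, ratio=0.5):
    """Extract row-wise road edge pixels from a binary mask, take the
    equal-proportion point between the edges (ratio=0.5 gives the midline),
    and fit a straight navigation line to those points by least squares."""
    points = []
    for y in range(mask.shape[0]):
        cols = np.flatnonzero(mask[y])        # road pixels in this image row
        if cols.size < 2:
            continue                          # no road span in this row
        left, right = cols[0], cols[-1]       # edge-point pixel coordinates
        x = left + ratio * (right - left)     # equal-proportion navigation point
        points.append((x, y))
    pts = np.asarray(points)
    # Fit x = a*y + b (x as a function of y), which avoids the vertical-line
    # degeneracy of a road seen from a forward-facing camera.
    a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)
    return a, b

# Toy example: a synthetic mask whose road drifts right as rows descend.
mask = np.zeros((20, 40), dtype=np.uint8)
for y in range(20):
    c = int(10 + 0.5 * y)
    mask[y, c - 3:c + 4] = 1                  # 7-pixel-wide road span
a, b = fit_navigation_line(mask)              # slope near 0.5, intercept near 10
```

The fitted pair `(a, b)` defines the navigation line in image coordinates, from which lateral and heading deviations of the robot can be computed.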

Key words: meat pigeon breeding farm, robots, U-Net, visual navigation, semantic segmentation, straight line fitting

CLC number: