[1] 邹何, 夏雪, 王粟萍, 等. 杏鲍菇相关研究进展及其产业开发现状[J]. 食品工业, 2019, 40(2): 276-283.
Zou He, Xia Xue, Wang Suping, et al. Research progress and industry development status of Pleurotus eryngii [J]. The Food Industry, 2019, 40(2): 276-283.
[2] 侯鹏帅, 刘玉乐, 宋欣, 等. 基于智能视觉识别的冬枣分选检测系统设计[J]. 中国农机化学报, 2020, 41(3): 109-114.
Hou Pengshuai, Liu Yule, Song Xin, et al. Sorting and detecting system design of winter jujube based on intelligent vision recognition [J]. Journal of Chinese Agricultural Mechanization, 2020, 41(3): 109-114.
[3] 邓立苗, 杜宏伟, 徐艳, 等. 基于机器视觉的马铃薯智能分选方法与实现[J]. 中国农机化学报, 2015, 36(5): 145-150.
Deng Limiao, Du Hongwei, Xu Yan, et al. Implementation of intelligent potato grading method based on machine vision [J]. Journal of Chinese Agricultural Mechanization, 2015, 36(5): 145-150.
[4] 何进荣, 石延新, 刘斌, 等. 基于DXNet模型的富士苹果外部品质分级方法研究[J]. 农业机械学报, 2021, 52(7): 379-385.
He Jinrong, Shi Yanxin, Liu Bin, et al. External quality grading method of Fuji apple based on DXNet model [J]. Transactions of the Chinese Society of Agricultural Machinery, 2021, 52(7): 379-385.
[5] 王斌. 基于深度图像和深度学习的机器人抓取检测算法研究[D]. 杭州: 浙江大学, 2019.
Wang Bin. Research on robotic grasping detection based on depth image and deep learning [D]. Hangzhou: Zhejiang University, 2019.
[6] 夏晶, 钱堃, 马旭东, 等. 基于级联卷积神经网络的机器人平面抓取位姿快速检测[J]. 机器人, 2018, 40(6): 795-802.
Xia Jing, Qian Kun, Ma Xudong, et al. Fast planar grasp pose detection for robot based on cascaded deep convolutional neural networks [J]. Robot, 2018, 40(6): 795-802.
[7] Lenz I, Lee H, Saxena A. Deep learning for detecting robotic grasps [J]. The International Journal of Robotics Research, 2015, 34(4-5): 705-724.
[8] Pinto L, Gupta A. Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours [C]. 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 3406-3413.
[9] Mahler J, Liang J, Niyaz S, et al. Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics [J]. arXiv preprint arXiv:1703.09312, 2017.
[10] Zeng A, Song S, Yu K T, et al. Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching [J]. The International Journal of Robotics Research, 2022, 41(7): 690-705.
[11] Asif U, Tang J, Harrer S. GraspNet: An efficient convolutional neural network for real-time grasp detection for low-powered devices [C]. IJCAI, 2018, 7: 4875-4882.
[12] Wang D, Liu C, Chang F, et al. High-performance pixel-level grasp detection based on adaptive grasping and grasp-aware network [J]. IEEE Transactions on Industrial Electronics, 2021, 69(11): 11611-11621.
[13] 夏晶. 基于RGB-D和深度学习的机器人抓取检测[D]. 南京: 东南大学, 2019.
Xia Jing. Deep-learning-based robot grasp detection using RGB-D sensor [D]. Nanjing: Southeast University, 2019.
[14] Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real-time object detection [C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 779-788.
[15] Liu Y, Lu B H, Peng J, et al. Research on the use of YOLOv5 object detection algorithm in mask wearing recognition [J]. World Scientific Research Journal, 2020, 6(11): 276-284.
[16] 闫彬, 樊攀, 王美茸, 等. 基于改进YOLOv5m的采摘机器人苹果采摘方式实时识别[J]. 农业机械学报, 2022, 53(9): 28-38, 59.
Yan Bin, Fan Pan, Wang Meirong, et al. Real-time apple picking pattern recognition for picking robot based on improved YOLOv5m [J]. Transactions of the Chinese Society of Agricultural Machinery, 2022, 53(9): 28-38, 59.
[17] 喻柏炜. 基于卷积神经网络YOLOv5模型的图表识别方法[D]. 南昌: 南昌大学, 2021.
Yu Bowei. Method of chart recognition based on convolutional neural network YOLOv5 model [D]. Nanchang: Nanchang University, 2021.
[18] Morrison D, Corke P, Leitner J. Learning robust, realtime, reactive robotic grasping [J]. The International Journal of Robotics Research, 2020, 39(2-3): 183-201.