[ 1 ] 王瑜, 郑子辉. 人工智能技术在农业领域的应用方向及发展路径[J]. 信息通信技术与政策, 2019(6): 29-31.
Wang Yu, Zheng Zihui. Application directions and development paths of artificial intelligence technology in agriculture [J]. Information and Communications Technology and Policy, 2019(6): 29-31.
[ 2 ] 陈全胜, 赵杰文, 蔡健荣, 等. 支持向量机在机器视觉识别茶叶中的应用研究[J]. 仪器仪表学报, 2006, 27(12): 1704-1706.
Chen Quansheng, Zhao Jiewen, Cai Jianrong, et al. Application of support vector machine in machine vision recognition of tea [J]. Chinese Journal of Scientific Instrument, 2006, 27(12): 1704-1706.
[ 3 ] 唐仙, 吴雪梅, 张富贵, 等. 基于阈值分割法的茶叶嫩芽识别研究[J]. 农业装备技术, 2013, 39(6): 10-14.
Tang Xian, Wu Xuemei, Zhang Fugui, et al. Research on tea bud recognition based on threshold segmentation [J]. Agricultural Equipment & Technology, 2013, 39(6): 10-14.
[ 4 ] 张景林. 基于K-Means和SVM耦合算法的茶叶图像识别[J]. 泉州师范学院学报, 2016, 34(6): 48-54.
Zhang Jinglin. Tea image recognition based on a coupled K-Means and SVM algorithm [J]. Journal of Quanzhou Normal University, 2016, 34(6): 48-54.
[ 5 ] 杨福增, 杨亮亮, 田艳娜, 等. 基于颜色和形状特征的茶叶嫩芽识别方法[J]. 农业机械学报, 2009, 40(S1): 119-123.
Yang Fuzeng, Yang Liangliang, Tian Yanna, et al. Recognition of the tea sprout based on color and shape features [J]. Transactions of the Chinese Society for Agricultural Machinery, 2009, 40(S1): 119-123.
[ 6 ] LeCun Y, Bengio Y, Hinton G. Deep learning [J]. Nature, 2015, 521(7553): 436-444.
[ 7 ] Wang W, Lai Q, Fu H, et al. Salient object detection in the deep learning era: An in‑depth survey [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 44: 3239-3259.
[ 8 ] Chen X, Wang X, Zhang K, et al. Recent advances and clinical applications of deep learning in medical image analysis [J]. Medical Image Analysis, 2022, 79: 102444.
[ 9 ] Mozaffari S, Al‑Jarrah O Y, Dianati M, et al. Deep learning‑based vehicle behavior prediction for autonomous driving applications: A review [J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 23(1): 33-47.
[10] Yang J, Guo X, Li Y, et al. A survey of few‑shot learning in smart agriculture: Developments, applications, and challenges [J]. Plant Methods, 2022, 18(1): 28.
[11] Chen C, Lu J, Zhou M, et al. A YOLOv3‑based computer vision system for identification of tea buds and the picking point [J]. Computers and Electronics in Agriculture, 2022, 198: 107116.
[12] 朱红春, 李旭, 孟炀, 等. 基于Faster R-CNN网络的茶叶嫩芽检测[J]. 农业机械学报, 2022, 53(5): 217-224.
Zhu Hongchun, Li Xu, Meng Yang, et al. Tea bud detection based on Faster R-CNN network [J]. Transactions of the Chinese Society for Agricultural Machinery, 2022, 53(5): 217-224.
[13] 徐海东, 马伟, 谭彧, 等. 基于YOLOv5深度学习的茶叶嫩芽估产方法[J]. 中国农业大学学报, 2022, 27(12): 213-220.
Xu Haidong, Ma Wei, Tan Yu, et al. Yield estimation method for tea buds based on YOLOv5 deep learning [J]. Journal of China Agricultural University, 2022, 27(12): 213-220.
[14] 王梦妮, 顾寄南, 王化佳, 等. 基于改进YOLOv5s模型的茶叶嫩芽识别方法[J]. 农业工程学报, 2023, 39(12): 150-157.
Wang Mengni, Gu Jinan, Wang Huajia, et al. Method for identifying tea buds based on improved YOLOv5s model [J]. Transactions of the Chinese Society of Agricultural Engineering, 2023, 39(12): 150-157.
[15] Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real‑time object detection [C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 779-788.
[16] Wang C Y, Bochkovskiy A, Liao H Y M. YOLOv7: Trainable bag‑of‑freebies sets new state‑of‑the‑art for real‑time object detectors [C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 7464-7475.
[17] He K, Zhang X, Ren S, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1904-1916.
[18] Wang C Y, Liao H Y M, Wu Y H, et al. CSPNet: A new backbone that can enhance learning capability of CNN [C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020: 390-391.
[19] Howard A, Sandler M, Chu G, et al. Searching for MobileNetV3 [C]. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 1314-1324.
[20] Ramachandran P, Zoph B, Le Q V. Searching for activation functions [J]. arXiv preprint arXiv:1710.05941, 2017.
[21] Nair V, Hinton G E. Rectified linear units improve restricted Boltzmann machines [C]. Proceedings of the 27th International Conference on Machine Learning, 2010: 807-814.