[ 1 ] Hasan A S M M, Sohel F, Diepeveen D, et al. A survey of deep learning techniques for weed detection from images [J]. Computers and Electronics in Agriculture, 2021, 184: 106067.
[ 2 ] Xu K, Shu L, Xie Q, et al. Precision weed detection in wheat fields for agriculture 4.0: A survey of enabling technologies, methods, and research challenges [J]. Computers and Electronics in Agriculture, 2023, 212: 108106.
[ 3 ] 付豪, 赵学观, 翟长远, 等. 基于深度学习的杂草识别方法研究进展[J]. 中国农机化学报, 2023, 44(5): 198-207.
Fu Hao, Zhao Xueguan, Zhai Changyuan, et al. Research progress on weed recognition method based on deep learning technology [J]. Journal of Chinese Agricultural Mechanization, 2023, 44(5): 198-207.
[ 4 ] Balaska V, Adamidou Z, Vryzas Z, et al. Sustainable crop protection via robotics and artificial intelligence solutions [J]. Machines, 2023, 11(8): 774.
[ 5 ] Xu J, Xiong Z, Bhattacharyya S P. PIDNet: A real-time semantic segmentation network inspired by PID controllers [C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 19529-19539.
[ 6 ] Cai Y, Zeng F, Xiao J, et al. Attention‑aided semantic segmentation network for weed identification in pineapple field [J]. Computers and Electronics in Agriculture, 2023, 210: 107881.
[ 7 ] Milioto A, Lottes P, Stachniss C. Real-time semantic segmentation of crop and weed for precision agriculture robots leveraging background knowledge in CNNs [C]. IEEE International Conference on Robotics and Automation (ICRA), 2018: 2229-2235.
[ 8 ] 孙俊, 谭文军, 武小红, 等. 多通道深度可分离卷积模型实时识别复杂背景下甜菜与杂草[J]. 农业工程学报, 2019, 35(12): 184-190.
Sun Jun, Tan Wenjun, Wu Xiaohong, et al. Real‑time recognition of sugar beet and weeds in complex backgrounds using multi‑channel depth‑wise separable convolution model [J]. Transactions of the Chinese Society of Agricultural Engineering, 2019, 35(12): 184-190.
[ 9 ] Khan A, Ilyas T, Umraiz M, et al. Ced‑net: Crops and weeds segmentation for smart farming using a small cascaded encoder‑decoder architecture [J]. Electronics, 2020, 9(10): 1602.
[10] Yang Q, Ye Y, Gu L, et al. MSFCA-Net: A multi-scale feature convolutional attention network for segmenting crops and weeds in the field [J]. Agriculture, 2023, 13(6): 1176.
[11] 王璨, 武新慧, 张燕青, 等. 基于移位窗口Transformer网络的玉米田间场景下杂草识别[J]. 农业工程学报, 2022, 38(15): 133-142.
Wang Can, Wu Xinhui, Zhang Yanqing, et al. Recognizing weeds in maize fields using shifted window Transformer network [J]. Transactions of the Chinese Society of Agricultural Engineering, 2022, 38(15): 133-142.
[12] Yu C, Gao C, Wang J, et al. Bisenet v2: Bilateral network with guided aggregation for real‑time semantic segmentation [J]. International Journal of Computer Vision, 2021, 129: 3051-3068.
[13] Yu C, Wang J, Peng C, et al. Bisenet: Bilateral segmentation network for real‑time semantic segmentation [C]. Proceedings of the European Conference on Computer Vision (ECCV), 2018: 325-341.
[14] Xie Z F, Wang S, Xu K, et al. Boosting night-time scene parsing with learnable frequency [J]. IEEE Transactions on Image Processing, 2023, 32: 2386-2398.
[15] Wallace G K. The JPEG still picture compression standard [J]. Communications of the ACM, 1991, 34(4): 30-44.
[16] Qin Z, Zhang P, Wu F, et al. Fcanet: Frequency channel attention networks [C]. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 783-792.
[17] 许栋, 杨关, 刘小明, 等. 基于自适应特征融合与转换的小样本图像分类[J]. 计算机工程与应用, 2022, 58(24): 223-232.
Xu Dong, Yang Guan, Liu Xiaoming, et al. Few-shot image classification based on adaptive feature fusion and transformation [J]. Computer Engineering and Applications, 2022, 58(24): 223-232.
[18] Dai Y, Gieseke F, Oehmcke S, et al. Attentional feature fusion [C]. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021: 3560-3569.
[19] Berman M, Triki A R, Blaschko M B. The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks [C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 4413-4421.
[20] Chebrolu N, Lottes P, Schaefer A, et al. Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields [J]. The International Journal of Robotics Research, 2017, 36(10): 1045-1052.