Journal of Dalian Ocean University  2023, Vol. 38 Issue (1): 129-139    DOI: 10.16535/j.cnki.dlhyxb.2022-338
Lightweight detection method for microalgae based on improved YOLO v7
WU Zhigao, CHEN Ming*
Key Laboratory of Fisheries Information, Ministry of Agriculture and Rural Affairs, College of Information Technology, Shanghai Ocean University, Shanghai 201306, China
Abstract: Traditional microalgae detection methods rely on complex equipment and extensive manual operation; they are time-consuming, and their results are easily influenced by the technicians' knowledge and experience. To address these problems, a lightweight real-time microalgae detection method, YOLO v7-MA, was proposed by clustering anchor boxes with the K-means++ algorithm and building on the YOLO v7 model. In this method, GhostNet was introduced into the YOLO v7 model as the backbone feature extraction network to reduce the number of network parameters; the ordinary convolution blocks in the feature fusion network were replaced with depthwise separable convolution blocks to further reduce the computational complexity of the model; and a CBAM attention module was added to the feature fusion network to improve its feature representation ability. Experiments on a dataset of 14 microalgae species showed that the YOLO v7-MA model achieved a mean average precision of 98.56%, a recall of 96.88%, an F1 score of 97.42%, 22.64×10⁶ parameters, and 38.45×10⁹ floating-point operations (FLOPs). Compared with the YOLO v7 model, the mean average precision, recall, and F1 score increased by 0.95%, 1.15%, and 0.23%, respectively, while the number of parameters and FLOPs decreased by 14.63% and 66.55%, respectively. Compared with the FasterRCNN-VGG16, FasterRCNN-Resnet50, YOLO v4, YOLO v4-Mobilenet v3, YOLO v4-VGG16, YOLO v4-Resnet50, and YOLO v5s models, the YOLO v7-MA model also achieved a higher mean average precision with fewer parameters. The findings indicate that the YOLO v7-MA model can provide a lightweight, real-time, and efficient detection method for the identification and classification of microalgae, greatly reducing the workload of detection personnel.
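To make the architectural changes described in the abstract concrete, the following is a minimal PyTorch sketch of a depthwise separable convolution block and a CBAM attention module of the kind the method inserts into the feature fusion network. It is an illustration written from the abstract's description, not the authors' implementation; the channel sizes, the SiLU activation, the reduction ratio of 16, and the 7×7 spatial-attention kernel are assumptions.

```python
# Illustrative sketch only: reconstructed from the abstract's description,
# not the authors' code. Channel sizes, the SiLU activation, the reduction
# ratio (16) and the 7x7 spatial-attention kernel are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial attention."""

    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Shared MLP applied to both the average-pooled and max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel attention: weight channels using global average- and max-pooled statistics.
        w = torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1)) + self.mlp(F.adaptive_max_pool2d(x, 1)))
        x = x * w
        # Spatial attention: a 7x7 convolution over the channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))


if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)               # dummy feature-fusion map
    feat = CBAM(256)(DepthwiseSeparableConv(256, 256)(feat))
    print(feat.shape)                                # torch.Size([1, 256, 40, 40])
```

For a k×k kernel, a depthwise separable block needs roughly 1/k² + 1/C_out of the multiply-accumulate operations of a standard convolution, which is one reason the reported FLOPs drop so sharply.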
Key words:  YOLO v7    microalgae detection    K-means++    GhostNet    depthwise separable convolution    attention mechanism
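The abstract also notes that anchor boxes are clustered with the K-means++ algorithm before training. The sketch below shows one common way to do this, clustering normalized (width, height) pairs from the training labels using scikit-learn's k-means++ initialization; the nine-anchor count and the use of Euclidean distance are assumptions, not details given in the paper.

```python
# Illustrative sketch only: cluster normalized (width, height) pairs of the
# training boxes into anchor sizes with k-means++ initialization.
# The 9-anchor count and Euclidean distance are assumptions.
import numpy as np
from sklearn.cluster import KMeans


def cluster_anchors(wh: np.ndarray, n_anchors: int = 9) -> np.ndarray:
    """wh: array of shape (N, 2) holding box widths and heights in [0, 1]."""
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0)
    km.fit(wh)
    # Sort anchors by area so they can be assigned to detection scales small to large.
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors.prod(axis=1))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_wh = rng.uniform(0.02, 0.4, size=(500, 2))   # stand-in for real label data
    print(cluster_anchors(fake_wh).round(3))
```

Many YOLO implementations cluster with a 1 - IoU distance rather than Euclidean distance on raw widths and heights; either way, the resulting cluster centers replace the default anchor sizes before training.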
Published: 2023-03-02      Online: 2023-03-02      Issue date: 2023-03-02
CLC number: S 182; TP 391.4
Supported by: Key-Area Research and Development Program of Guangdong Province (2021B0202070001); Key Technology Innovation Program of Jiangsu Modern Agricultural Industry (CX(20)2028)
Cite this article:
WU Zhigao, CHEN Ming. Lightweight detection method for microalgae based on improved YOLO v7. Journal of Dalian Ocean University, 2023, 38(1): 129-139.
Link to this article:
https://xuebao.dlou.edu.cn/CN/10.16535/j.cnki.dlhyxb.2022-338  or  https://xuebao.dlou.edu.cn/CN/Y2023/V38/I1/129