RGB-D Target Detection Algorithm Based on Attention Fusion Network
DOI:
CSTR:
Author:
Affiliation:

School of Software, Beijing Jiaotong University, Beijing 100044, China

Author Biography:

Corresponding Author:

CLC Number:

TP75

Fund Project:



Abstract:

To address the insufficient cross-modal fusion and low detection efficiency of current target detection methods that use RGB-D images, a progressive feature fusion network based on an attention mechanism is proposed. First, under a YOLOv3-based backbone, separate RGB and depth networks are trained with labeled RGB-D samples; the two feature streams are then enhanced by an attention module and fused layer by layer in the middle stages of the network to obtain the final feature weights. Evaluated on the challenging NYU Depth v2 dataset, the proposed method achieves a mean average precision (mAP) of 77.8%. Comparative experiments show that the proposed attention-based fusion network clearly outperforms similar algorithms.
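As a rough illustration of the fusion step described above, the sketch below enhances same-resolution RGB and depth feature maps with channel attention and merges them with a 1x1 convolution. The class names, the SE-style choice of attention, and the concatenation-based fusion rule are assumptions made for this example; the abstract does not specify the paper's exact module design.

```python
# Minimal PyTorch sketch of attention-guided RGB-D feature fusion.
# All module names and design details here are illustrative assumptions,
# not the exact architecture used in the paper.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention used to enhance one modality."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                               # global context: (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                          # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)                                      # reweight the input channels


class AttentionFusion(nn.Module):
    """Fuse same-resolution RGB and depth feature maps from two backbone branches."""

    def __init__(self, channels: int):
        super().__init__()
        self.rgb_att = ChannelAttention(channels)
        self.depth_att = ChannelAttention(channels)
        # 1x1 conv mixes the two enhanced streams back to the original width,
        # so the fused map can feed later detection-head layers unchanged.
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        rgb_enh = self.rgb_att(rgb_feat)
        depth_enh = self.depth_att(depth_feat)
        fused = torch.cat([rgb_enh, depth_enh], dim=1)
        return self.mix(fused)


if __name__ == "__main__":
    # Two mid-level feature maps of matching shape, e.g. stride-16 backbone
    # outputs for a 416x416 input (shapes are illustrative only).
    rgb = torch.randn(1, 256, 26, 26)
    depth = torch.randn(1, 256, 26, 26)
    fused = AttentionFusion(256)(rgb, depth)
    print(fused.shape)  # torch.Size([1, 256, 26, 26])
```

Applying one such block at each of several mid-network stages would give the layer-by-layer fusion the abstract describes, with the attention weights deciding how much each modality contributes at that stage.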

Cite This Article

Zhu Shuqin. RGB-D Target Detection Algorithm Based on Attention Fusion Network [J]. 电子测量技术 (Electronic Measurement Technology), 2021, 44(9): 110-115.

History
  • Received:
  • Revised:
  • Accepted:
  • Published online: 2024-09-29
  • Publication date: