UAV regional coverage path planning strategy based on DDQN
DOI:
CSTR:
Authors: Shen Xiao, Zhao Tongzhou
Affiliations:

1. School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China; 2. Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan 430205, China

Author biography:

Corresponding author:

CLC number:

TP391.9

Fund project:

Supported by the National Key R&D Program of China (2016YFC0801003)



Abstract:

The coverage path planning of a UAV over an unknown area is studied using a deep reinforcement learning method. A grid environment model is built, the UAV and the no-fly zones are deployed at random positions in the environment, and a double deep Q-network (DDQN) is used to train the UAV's coverage policy, yielding a DDQN-based coverage path planning framework for unknown areas. Simulation experiments show that the proposed framework achieves full coverage in environments without no-fly zones and still completes the coverage task well when an unknown number of no-fly zones is present. Under the same training conditions and number of training episodes, its average coverage rate is 2% higher than that of DQN; in environments without no-fly zones, it is 4% and 3% higher than Q-Learning and Sarsa, respectively.
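
To make the setup described in the abstract concrete, the sketch below illustrates, under stated assumptions, the two core ingredients in Python: a toy grid coverage environment in which the UAV start cell and the no-fly cells are placed at random, and the double-DQN target rule that decouples action selection (online network) from action evaluation (target network). The names (GridCoverageEnv, ddqn_targets), grid size, reward values, and state encoding are illustrative choices, not the paper's actual implementation.

```python
# Minimal sketch (not the paper's code): a random grid coverage environment
# and the double-DQN target computation used to train a coverage policy.
import numpy as np


class GridCoverageEnv:
    """Toy coverage task: the UAV and the no-fly cells are placed at random;
    visiting an uncovered cell is rewarded, revisits and blocked moves are penalized."""

    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def __init__(self, size=10, n_nofly=3, seed=None):
        self.size, self.n_nofly = size, n_nofly
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        cells = self.rng.choice(self.size * self.size, self.n_nofly + 1, replace=False)
        self.pos = divmod(int(cells[0]), self.size)                   # random UAV start cell
        self.nofly = {divmod(int(c), self.size) for c in cells[1:]}   # random no-fly cells
        self.covered = {self.pos}
        return self._state()

    def _state(self):
        # Three planes: UAV position, covered cells, no-fly cells.
        grid = np.zeros((3, self.size, self.size), dtype=np.float32)
        grid[0][self.pos] = 1.0
        for cell in self.covered:
            grid[1][cell] = 1.0
        for cell in self.nofly:
            grid[2][cell] = 1.0
        return grid

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r, c = self.pos[0] + dr, self.pos[1] + dc
        if not (0 <= r < self.size and 0 <= c < self.size) or (r, c) in self.nofly:
            return self._state(), -1.0, False                         # blocked: stay and pay a penalty
        self.pos = (r, c)
        reward = 1.0 if self.pos not in self.covered else -0.1
        self.covered.add(self.pos)
        done = len(self.covered) == self.size * self.size - len(self.nofly)
        return self._state(), reward, done                            # done once every free cell is covered


def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double-DQN targets: the online network selects the greedy next action,
    the target network evaluates it, which reduces the overestimation bias of DQN."""
    best_actions = np.argmax(q_online_next, axis=1)
    evaluated = q_target_next[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated


if __name__ == "__main__":
    env = GridCoverageEnv(size=5, n_nofly=2, seed=0)
    state = env.reset()
    state, reward, done = env.step(3)                                 # try moving right
    # Fake Q-values for a batch of 2 transitions and 4 actions, just to show the target rule.
    q_online_next = np.array([[0.1, 0.4, 0.2, 0.0], [0.3, 0.1, 0.0, 0.2]])
    q_target_next = np.array([[0.0, 0.5, 0.1, 0.2], [0.6, 0.2, 0.1, 0.1]])
    print(ddqn_targets(q_online_next, q_target_next,
                       rewards=np.array([1.0, -0.1]),
                       dones=np.array([0.0, 0.0])))
```

In a full DDQN training loop, q_online_next and q_target_next would come from two copies of the same network, with the target copy refreshed only periodically; the same target rule applies regardless of the network architecture used for the grid state.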

Cite this article:

Shen Xiao, Zhao Tongzhou. UAV regional coverage path planning strategy based on DDQN [J]. Electronic Measurement Technology, 2023, 46(14): 30-

Article history
  • Received:
  • Revised:
  • Accepted:
  • Published online: 2024-01-18
  • Published in issue: