UAV regional coverage path planning strategy based on DDQN
Affiliation:

1. School of Computer Science & Engineering and Artificial Intelligence, Wuhan Institute of Technology, Wuhan 430205, China; 2. Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan 430205, China

CLC Number: TP391.9


Abstract:

The path planning of UAV area coverage in an unknown environment is studied with a deep reinforcement learning method. A grid environment model is built, the UAV and no-fly zones are deployed randomly in the environment, and a double deep Q-network (DDQN) is used to train the UAV's coverage strategy, yielding a UAV coverage path planning framework based on DDQN. Simulation experiments show that the proposed framework achieves full coverage in environments without no-fly zones and also completes the area coverage task well in environments containing an unknown number of no-fly zones. Compared with the DQN method, its average coverage rate is about 2% higher under the same training conditions and number of training episodes, and in the environment without no-fly zones it also exceeds the Q-learning and SARSA methods.
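
To make the set-up described in the abstract concrete, the following Python (PyTorch) sketch shows a minimal grid coverage environment with a randomly placed UAV and no-fly zones, together with a double deep Q-network update in which the online network selects the next action and the target network evaluates it. This is not the authors' code; the grid size, reward values, network width, and training hyper-parameters are illustrative assumptions.

# Minimal DDQN coverage sketch (not the authors' code). Grid size, rewards,
# network width and hyper-parameters are illustrative assumptions.
import random
import numpy as np
import torch
import torch.nn as nn

GRID, ACTIONS = 10, 4                                   # 10x10 grid; up/down/left/right

class CoverageEnv:
    """UAV area-coverage grid: reward for newly covered cells, penalty at no-fly zones."""
    def __init__(self, n_nofly=5):
        self.n_nofly = n_nofly

    def reset(self):
        self.covered = np.zeros((GRID, GRID), dtype=np.float32)
        self.nofly = set()
        while len(self.nofly) < self.n_nofly:           # randomly deployed no-fly zones
            self.nofly.add((random.randrange(GRID), random.randrange(GRID)))
        free_cells = [p for p in np.ndindex(GRID, GRID) if p not in self.nofly]
        self.pos = random.choice(free_cells)            # randomly deployed UAV start cell
        self.covered[self.pos] = 1.0
        return self._obs()

    def _obs(self):
        pos_map = np.zeros((GRID, GRID), dtype=np.float32)
        pos_map[self.pos] = 1.0                         # one-hot UAV position + coverage map
        return np.concatenate([pos_map.ravel(), self.covered.ravel()])

    def step(self, a):
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][a]
        r, c = self.pos[0] + dr, self.pos[1] + dc
        if not (0 <= r < GRID and 0 <= c < GRID) or (r, c) in self.nofly:
            return self._obs(), -1.0, False             # boundary / no-fly zone: penalty, stay
        self.pos = (r, c)
        reward = 1.0 if self.covered[r, c] == 0 else -0.1
        self.covered[r, c] = 1.0
        done = self.covered.sum() >= GRID * GRID - len(self.nofly)   # all free cells covered
        return self._obs(), reward, done

def make_qnet():                                        # small fully connected Q-network
    return nn.Sequential(nn.Linear(2 * GRID * GRID, 128), nn.ReLU(), nn.Linear(128, ACTIONS))

online, target = make_qnet(), make_qnet()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)
buffer, gamma, eps = [], 0.95, 0.1

def ddqn_update(batch):
    s, a, r, s2, d = (torch.as_tensor(np.array(x), dtype=torch.float32) for x in zip(*batch))
    q = online(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        best = online(s2).argmax(1, keepdim=True)       # DDQN: online net selects the action...
        y = r + gamma * (1.0 - d) * target(s2).gather(1, best).squeeze(1)   # ...target net evaluates it
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad(); loss.backward(); opt.step()

env = CoverageEnv()
for episode in range(50):                               # short demonstration run
    s, done, steps = env.reset(), False, 0
    while not done and steps < 400:
        if random.random() < eps:                       # epsilon-greedy exploration
            a = random.randrange(ACTIONS)
        else:
            a = int(online(torch.as_tensor(s).unsqueeze(0)).argmax())
        s2, r, done = env.step(a)
        buffer.append((s, a, r, s2, float(done)))
        s, steps = s2, steps + 1
        if len(buffer) >= 64:
            ddqn_update(random.sample(buffer, 64))
    if episode % 10 == 0:
        target.load_state_dict(online.state_dict())     # periodic target network sync
    print(f"episode {episode}: coverage {env.covered.sum() / (GRID * GRID - len(env.nofly)):.2f}")

The decoupling of action selection (online network) from action evaluation (target network) in the update target is what distinguishes DDQN from DQN and reduces the overestimation of Q-values.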

Online: January 18, 2024