Mobile Robot Path Planning Based on Improved Deep Reinforcement Learning
DOI:
Author:
Affiliation:

1. School of Computer Science and Technology, Shenyang University of Chemical Technology, Shenyang 110142, China; 2. Liaoning Key Laboratory of Intelligent Technology for Chemical Process Industry, Shenyang 110142, China

CLC Number: TP242

Fund Project:

Abstract:

    In traditional deep reinforcement learning with sparse rewards, a mobile robot receives a positive reward only when it reaches the target position within the allotted number of time steps; every intermediate step of the path planning problem yields a negative reward. A path planning method based on an improved deep Q-network (DQN) is proposed. During exploration, the mobile robot samples trajectories conditioned on the real target; during experience replay, the real target is replaced with states the robot has actually reached, so that the robot obtains enough positive reward signals to begin learning. A deep convolutional neural network takes the raw RGB image as input and is trained end to end: the network parameters are trained using an upper confidence bound (UCB) exploration strategy and mini-batch sampling, and the network finally outputs Q-values for the four actions up, down, left, and right. In the same simulation environment, the results show that the algorithm improves sampling efficiency, iterates faster during training, and converges more easily; the success rate of avoiding obstacles and reaching the goal increases by about 40%, which alleviates the problems caused by sparse rewards to a certain extent.
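The goal-substitution idea described in the abstract — replaying a trajectory with the real target swapped for a state the robot actually reached, so sparse rewards become positive — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `Transition` record, the grid-cell states, and the `k` substitute-goals-per-transition parameter are all assumptions made for the example.

```python
import random
from collections import namedtuple

# One transition from a goal-conditioned rollout.
# `goal` is the target the robot was conditioned on while acting.
Transition = namedtuple("Transition", "state action next_state goal reward done")

def sparse_reward(achieved_state, goal):
    """Sparse reward: positive only when the goal state is reached."""
    return 1.0 if achieved_state == goal else -1.0

def relabel_episode(episode, k=4):
    """In addition to the original (real-goal) transitions, store copies
    whose goal is replaced by a state actually reached later in the same
    episode, so the replay buffer contains positive reward signals."""
    replay = list(episode)  # keep the original transitions
    for i, t in enumerate(episode):
        # sample up to k states reached at or after step i as substitute goals
        future = episode[i:]
        for _ in range(min(k, len(future))):
            new_goal = random.choice(future).next_state
            replay.append(Transition(
                t.state, t.action, t.next_state, new_goal,
                sparse_reward(t.next_state, new_goal),
                t.next_state == new_goal))
    return replay
```

Because each transition's own successor state is a candidate substitute goal, every episode is guaranteed to contribute at least one positive-reward sample to the buffer, which is what lets learning start under a sparse reward.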
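The abstract's upper-confidence-bound exploration strategy can be sketched in isolation as a score over the four discrete actions. The `c` exploration constant and the per-action visit counts are illustrative assumptions; the paper applies the idea to Q-values produced by the convolutional network.

```python
import math

def ucb_action(q_values, counts, total, c=2.0):
    """Pick the action maximizing Q(a) + c * sqrt(ln(total) / n(a)),
    where n(a) is how often action a has been tried so far.
    Untried actions are selected first, so every action gets explored."""
    best, best_score = None, float("-inf")
    for a, q in enumerate(q_values):
        if counts[a] == 0:
            return a  # try each action at least once
        score = q + c * math.sqrt(math.log(total) / counts[a])
        if score > best_score:
            best, best_score = a, score
    return best
```

The bonus term shrinks as an action's visit count grows, so rarely tried actions (e.g. an obstacle-adjacent move with a low Q-value) still get sampled, trading off exploitation of the current Q-estimates against exploration.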

History
  • Online: July 04, 2024