To address issues in traditional Double Deep Q-Network (DDQN) path planning algorithms, such as the imbalanced allocation of exploration and exploitation and insufficient utilization of experience data, an improved DDQN path planning algorithm is proposed. Firstly, the concept of exploration success rate is introduced into an adaptive exploration strategy that divides the training process into exploration and exploitation phases, allocating exploration and exploitation effectively. Secondly, a Double Experience Pool mixed sampling mechanism partitions and samples experience data according to reward size, maximizing the utilization of beneficial data. Finally, a reward function based on the Artificial Potential Field method is designed so that the robot receives denser single-step rewards, effectively alleviating the sparse-reward problem. Experimental results show that, compared with the traditional DDQN algorithm and the DDQN algorithm based on experience classification and multi-step learning, the proposed algorithm achieves higher reward values, higher success rates, shorter planning times, and fewer planning steps, demonstrating superior overall performance.
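To make the experience-partitioning idea concrete, the following is a minimal sketch of a double experience pool with mixed sampling; the reward threshold, pool capacity, and mixing ratio shown here are illustrative assumptions, not the settings used in the paper.

```python
import random
from collections import deque

class DoubleExperiencePool:
    """Sketch: transitions are routed to a 'good' or 'ordinary' pool by reward,
    and training batches are drawn from both pools in a fixed mixing ratio.
    Threshold, capacity, and ratio are assumed values for illustration."""

    def __init__(self, capacity=10000, reward_threshold=0.0, good_ratio=0.5):
        self.good = deque(maxlen=capacity)       # high-reward transitions
        self.ordinary = deque(maxlen=capacity)   # remaining transitions
        self.reward_threshold = reward_threshold
        self.good_ratio = good_ratio

    def push(self, state, action, reward, next_state, done):
        # Partition by reward size when the transition is stored.
        pool = self.good if reward > self.reward_threshold else self.ordinary
        pool.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Mixed sampling: draw part of the batch from each pool,
        # falling back to the ordinary pool if the good pool is small.
        n_good = min(int(batch_size * self.good_ratio), len(self.good))
        n_ord = min(batch_size - n_good, len(self.ordinary))
        return random.sample(self.good, n_good) + random.sample(self.ordinary, n_ord)
```

In use, `push` is called after every environment step and `sample` supplies the minibatch for each DDQN update, so that beneficial (high-reward) transitions are replayed more often without discarding ordinary experience.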