Abstract: To address the external disturbances and inherent instability that characterize inverted pendulum control, as well as the low utilization of sampled data and slow convergence of the stochastic off-policy network in the deep reinforcement learning Soft Actor-Critic (SAC) algorithm, an improved algorithm, PRER_SAC, is proposed that combines recency-based experience sampling with an optimized policy network structure. A neural network is constructed to fit the value and policy functions; the policy network adopts the better-performing Mish function as its activation function, and a self-adjusting temperature coefficient is set to enhance the agent's exploration ability. Two experience pools, a near pool and a far pool, are designed together with a training strategy that varies the frequency of data storage. Simulation experiments show that, for the same number of training episodes, the proposed method achieves higher return values and faster convergence than the DDPG and SAC algorithms, and better control performance than the traditional PID and LQR methods. Finally, an angle disturbance applied to the trained agent is eliminated within 2 s, demonstrating the strong applicability of the proposed algorithm.
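To make the two key modifications named in the abstract concrete, the sketch below shows a Mish-activated policy network and a near/far dual experience pool with recency-biased sampling. This is a minimal illustration, not the paper's implementation: the layer widths, pool sizes, the `NearFarBuffer` name, and the `near_ratio` mixing parameter are all assumptions, and the state/action dimensions assume a standard cart-pole setup.

```python
import random
from collections import deque

import torch.nn as nn

# Policy network with the Mish activation (available as nn.Mish in
# PyTorch >= 1.9): mish(x) = x * tanh(softplus(x)).
# Widths and dimensions are illustrative assumptions.
policy_net = nn.Sequential(
    nn.Linear(4, 256),   # assumed state: [x, x_dot, theta, theta_dot]
    nn.Mish(),
    nn.Linear(256, 256),
    nn.Mish(),
    nn.Linear(256, 2),   # mean and log-std of a Gaussian action
)


class NearFarBuffer:
    """Illustrative dual experience pool: fresh transitions enter the
    'near' pool; when it is full, the oldest transition is demoted to
    the 'far' pool. Sampling mixes both pools with a bias toward
    recent data (a guess at the paper's recency experience sampling)."""

    def __init__(self, near_size=10_000, far_size=90_000, near_ratio=0.7):
        self.near = deque(maxlen=near_size)
        self.far = deque(maxlen=far_size)
        self.near_ratio = near_ratio  # fraction of each batch from 'near'

    def push(self, transition):
        if len(self.near) == self.near.maxlen:
            self.far.append(self.near.popleft())  # demote oldest to 'far'
        self.near.append(transition)

    def sample(self, batch_size):
        n_near = min(int(batch_size * self.near_ratio), len(self.near))
        n_far = min(batch_size - n_near, len(self.far))
        return random.sample(self.near, n_near) + random.sample(self.far, n_far)
```

The self-adjusting temperature coefficient mentioned in the abstract is not sketched here; standard SAC implementations tune it by gradient descent on an entropy-target loss, which the paper's variant presumably builds on.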