Abstract: To address the low sampling efficiency, poor environmental adaptability, and poor decision-making that reinforcement learning exhibits when solving end-to-end autonomous driving behavioral decision problems, a recurrent proximal policy optimization (RPPO) algorithm is proposed. The algorithm introduces a mobile inverted bottleneck convolution (MBConv) module and an LSTM to construct the policy network and the value network, which effectively integrate the correlation information between consecutive frames, enabling the agent to predict multivariate situations and improving its ability to rapidly perceive the environment. An L2 regularization layer is added to the value network to further improve the generalization ability of the algorithm. Finally, the agent is manually constrained to keep its action constant over two consecutive frames, introducing prior knowledge that restricts the search space and accelerates convergence. In tests in the CARLA open-source simulation environment, the reward curve of the improved method significantly dominated that of the traditional method, and the success rates of three types of tasks, namely straight driving, turning, and driving along a designated route, increased by 10%, 16%, and 30%, respectively, demonstrating that the proposed method is more effective.
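As a rough illustration of the network structure the abstract describes, the following PyTorch sketch combines an MBConv-style block with an LSTM to produce shared recurrent features for the policy and value heads. All class names, layer sizes, and hyperparameters here (e.g., MBConvBlock, hidden=256) are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assuming PyTorch) of an MBConv + LSTM actor-critic
# network of the kind described in the abstract. Layer sizes and names
# are illustrative choices, not the paper's exact configuration.
import torch
import torch.nn as nn

class MBConvBlock(nn.Module):
    """Mobile inverted bottleneck convolution: 1x1 expansion ->
    depthwise 3x3 conv -> 1x1 projection, with a residual skip
    when input and output shapes match."""
    def __init__(self, in_ch, out_ch, expand=4, stride=1):
        super().__init__()
        mid = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),      # expansion
            nn.BatchNorm2d(mid), nn.SiLU(),
            nn.Conv2d(mid, mid, 3, stride, 1,          # depthwise
                      groups=mid, bias=False),
            nn.BatchNorm2d(mid), nn.SiLU(),
            nn.Conv2d(mid, out_ch, 1, bias=False),     # projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out

class RecurrentActorCritic(nn.Module):
    """MBConv encoder -> LSTM over the frame sequence -> policy and
    value heads, fusing correlation information across frames."""
    def __init__(self, n_actions, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.SiLU(),
            MBConvBlock(32, 64, stride=2),
            MBConvBlock(64, 64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.policy_head = nn.Linear(hidden, n_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)           # state value

    def forward(self, frames, state=None):
        # frames: (batch, time, 3, H, W); the LSTM integrates
        # front-and-back frame context before both heads.
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        feats, state = self.lstm(feats, state)
        return self.policy_head(feats), self.value_head(feats), state
```

In a sketch like this, the L2 penalty on the value network could be approximated by applying weight_decay only to the value-head parameters in the optimizer, and the two-frame action constraint by stepping the environment twice per policy query; both are simplified stand-ins for the techniques the abstract names.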