Abstract: Nighttime vehicle detection is of great significance to the safety of unmanned vehicles. At night, low light intensity makes the geometric characteristics of a vehicle inconspicuous, and a distant vehicle is even harder to recognize because of its small size, which significantly increases the difficulty of detection. In this context, this paper proposes a nighttime target detection algorithm for unmanned vehicles based on an improved YOLOv5s model. First, night road scenes in Yulin City are collected to construct a dataset, and the data are enhanced with the Retinex algorithm. On this basis, three measures are taken to improve the traditional YOLOv5s network: introducing depthwise separable convolution into the Backbone structure to reduce the number of network parameters; combining multiple attention mechanisms with the FPN structure to improve the network's feature extraction ability; and embedding dilated convolution into the PAN structure to reduce the number of network parameters and the loss of feature information while keeping the receptive field unchanged. The experimental results show that the average accuracy of nighttime vehicle detection reaches 84.8%, which is 5.2% higher than that of the original model, and the detection speed reaches 48 frames per second, an increase of 9.1%. The research results lay a theoretical foundation for improving the driving safety of unmanned vehicles during accident-prone nights.
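For illustration, the sketch below shows a depthwise separable convolution block of the kind the abstract describes introducing into the Backbone: a per-channel 3x3 depthwise convolution followed by a 1x1 pointwise convolution, which reduces parameters relative to a standard 3x3 convolution. The class name, channel counts, and use of SiLU activation are assumptions for this example, not the authors' exact implementation.

```python
# Minimal PyTorch sketch of a depthwise separable convolution block.
# Names and hyperparameters are illustrative assumptions, not the paper's exact module.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes channels and sets the output width
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()  # YOLOv5 uses SiLU activations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)
    y = DepthwiseSeparableConv(64, 128, stride=2)(x)
    print(y.shape)  # torch.Size([1, 128, 40, 40])
```

Compared with a standard 3x3 convolution (in_ch * out_ch * 9 weights), this block uses in_ch * 9 + in_ch * out_ch weights, which is the parameter reduction the abstract refers to.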