Abstract: Research on specific emitter identification based on deep learning has mainly focused on improving recognition accuracy, while largely ignoring the threat that adversarial samples pose during the recognition process. To address this problem, this work not only increases the number of emitter categories and improves the model's recognition accuracy, but also analyzes the impact of adversarial samples on a deep learning recognition network with a high recognition rate. In the experiments, a small set of ADS-B signal samples was collected and the data were sliced randomly. The original network was then fine-tuned and a convolutional attention module was added to improve the model's recognition rate. Finally, adversarial samples were generated with four adversarial attack algorithms and tested against the pre-trained network. In addition, images of signal examples before and after the attack were compared to balance the attack success rate against attack concealment. The results show that even a model with a high recognition rate is vulnerable to adversarial samples; among the four algorithms, the momentum iterative method performs best, and its attack performance exceeds that of the fast gradient sign method by more than 10%.
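To illustrate the two attacks compared in the abstract, the following is a minimal PyTorch sketch of the fast gradient sign method and the momentum iterative method applied to a pre-trained classifier. The model, input shapes, and hyperparameters (epsilon, step count, decay factor) are illustrative assumptions, not the paper's actual settings.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Single-step FGSM: perturb x along the sign of the loss gradient.
    eps is an assumed perturbation budget, not taken from the paper."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def mim_attack(model, x, y, eps=0.03, steps=10, decay=1.0):
    """Momentum iterative method: accumulate a velocity of normalized
    gradients over several small steps, clipping to the eps-ball around x."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize the gradient by its mean absolute value and add it
        # to the momentum term (the core idea of MIM).
        norm = grad.abs().mean(dim=tuple(range(1, grad.dim())), keepdim=True)
        g = decay * g + grad / (norm + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around the clean sample.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
    return x_adv.detach()
```

Here `x` would be a batch of sliced signal representations (for example, time-frequency images of the ADS-B samples) and `y` the emitter labels; both functions return perturbed inputs that can be fed back to the pre-trained classifier to measure the drop in recognition rate.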