Abstract: Style variation among different cameras is an important challenge in person re-identification. To smooth camera-style disparities and enrich the diversity of pedestrian samples, this paper explicitly learns camera-invariant features through a style-transfer approach. Specifically, a cycle-consistent adversarial network (CycleGAN) is used to generate, for each pedestrian, transformed images in the styles of the other cameras; these, together with the original samples, form the augmented training set. In addition, an attention mechanism reweights the feature channels to extract more discriminative pedestrian appearance features, and a multi-task loss supervises the training of the re-identification network. Experimental results show that the proposed method achieves mAP/top-1 scores of 86.5%/95.1% on the public Market1501 dataset and 77.1%/87.2% on DukeMTMC-reID, outperforming existing algorithms. As a data augmentation approach, camera style transfer effectively expands the dataset and reduces human labeling cost while improving identification accuracy in multi-camera scenarios.
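The abstract does not specify how the channel reweighting is implemented; the sketch below is a minimal illustration only, assuming a squeeze-and-excitation-style channel attention module in PyTorch (the class name, reduction ratio, and layer layout are assumptions, not the paper's exact design).

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Illustrative squeeze-and-excitation-style channel reweighting;
    the paper's actual attention design may differ."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) convolutional feature map
        b, c, _, _ = x.shape
        # Squeeze: global average pooling over the spatial dimensions
        weights = self.fc(x.mean(dim=(2, 3)))  # shape (b, c), values in (0, 1)
        # Excitation: rescale each feature channel by its learned weight
        return x * weights.view(b, c, 1, 1)
```

Such a module would typically be inserted after convolutional stages of the re-ID backbone so that discriminative appearance channels are amplified before the multi-task (e.g., identification plus metric-learning) losses are applied.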