Abstract: To address the poor structural consistency and insufficient texture detail in the results of existing image inpainting algorithms, an image inpainting algorithm based on multi-feature fusion is proposed within the generative adversarial network (GAN) framework. First, a dual encoder-decoder structure is used to extract texture and structure features, and fast Fourier convolution residual blocks are introduced to effectively capture global context. Then, an attention feature fusion (AFF) module exchanges information between the structure and texture features to improve the global consistency of the image, and a densely connected feature aggregation (DCFA) module extracts rich multi-scale semantic features to further improve the consistency and accuracy of the inpainted image and recover finer detail. Experimental results show that, compared with the best-performing baseline, the proposed algorithm improves PSNR and SSIM by 1.18% and 0.70%, respectively, and reduces FID by 3.99% on the CelebA-HQ dataset when the damaged region covers 40%~50% of the image; on the Paris Street View dataset, PSNR and SSIM increase by 1.17% and 0.50%, respectively, and FID decreases by 2.29%. These results demonstrate that the proposed algorithm can effectively repair images with large missing regions, producing more plausible structures and richer texture details.
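The fast Fourier convolution residual block mentioned above can be illustrated with a minimal sketch. The code below is a simplified, hypothetical NumPy version (the paper's actual block is a trained neural-network layer): a local branch mixes channels pointwise, while a global branch filters features in the frequency domain so that every output position has an image-wide receptive field, and a residual connection adds the input back. All function and parameter names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ffc_residual_block(x, spectral_weight, local_weight):
    """Illustrative fast Fourier convolution (FFC) residual block.

    x               -- feature map of shape (C, H, W)
    spectral_weight -- learned per-frequency filter, shape (C, H, W//2 + 1)
    local_weight    -- learned 1x1-conv channel-mixing matrix, shape (C, C)
    (All names/shapes are assumptions for this sketch.)
    """
    C, H, W = x.shape
    # Global branch: real 2-D FFT over the spatial dims, per-frequency
    # scaling, inverse FFT -- this captures global context in one step.
    spec = np.fft.rfft2(x, axes=(-2, -1))              # (C, H, W//2 + 1)
    spec = spec * spectral_weight                      # spectral filtering
    global_out = np.fft.irfft2(spec, s=(H, W), axes=(-2, -1))
    # Local branch: a 1x1 convolution is just channel mixing per pixel.
    local_out = np.einsum('oc,chw->ohw', local_weight, x)
    # Fuse branches with a ReLU, then add the residual (identity) path.
    return x + np.maximum(local_out + global_out, 0.0)
```

Because the spectral filter acts on the whole frequency plane at once, a single block can relate distant image regions, which is what lets the network keep large inpainted areas globally consistent.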