Image inpainting algorithm based on multi-feature fusion
CLC Number: TP391.4; TN919.8
Abstract:

Aiming at the problems of poor structural consistency and insufficient texture detail in the results of existing image inpainting algorithms, an image inpainting algorithm based on multi-feature fusion is proposed under the framework of the Generative Adversarial Network (GAN). First, a dual encoder-decoder structure is used to extract texture and structure feature information, and a fast Fourier convolution residual block is introduced to effectively capture global context features. Then, information exchange between structure and texture features is carried out through an Attention Feature Fusion (AFF) module to improve the global consistency of the image. Finally, a Dense Connected Feature Aggregation (DCFA) module is used to extract rich semantic features at multiple scales, further improving the consistency and accuracy of the inpainted image and presenting more detailed content. Experimental results show that, compared with the best-performing comparison method, the proposed algorithm improves PSNR and SSIM by 1.18% and 0.70% respectively and reduces FID by 3.99% on the CelebA-HQ dataset when the proportion of damaged regions is 40%-50%; on the Paris Street View dataset, PSNR and SSIM are increased by 1.17% and 0.50% respectively, and FID is reduced by 2.29%. The experiments demonstrate that the proposed algorithm can effectively repair images with large damaged regions, and the repaired images have a more reasonable structure and richer texture details.
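The fast Fourier convolution residual block mentioned in the abstract can be illustrated with a short sketch. The following PyTorch code is a minimal, assumed implementation for illustration only: the channel layout, layer choices, and the SpectralTransform and FFCResidualBlock names are hypothetical and do not reproduce the authors' exact design. It shows how a convolution applied in the frequency domain gives a layer an image-wide receptive field, which is what allows such a block to capture global context features.

# Minimal sketch (assumed PyTorch implementation) of a fast Fourier
# convolution residual block; names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class SpectralTransform(nn.Module):
    """Global branch: convolve in the frequency domain so one layer
    covers the whole spatial extent of the feature map."""

    def __init__(self, channels):
        super().__init__()
        # 1x1 convolution applied to stacked real/imaginary parts of the rFFT.
        self.freq_conv = nn.Sequential(
            nn.Conv2d(channels * 2, channels * 2, kernel_size=1),
            nn.BatchNorm2d(channels * 2),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Real FFT over the spatial dims -> complex tensor (b, c, h, w//2 + 1).
        freq = torch.fft.rfft2(x, norm="ortho")
        freq = torch.cat([freq.real, freq.imag], dim=1)
        freq = self.freq_conv(freq)
        real, imag = freq.chunk(2, dim=1)
        # Back to the spatial domain at the original resolution.
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")


class FFCResidualBlock(nn.Module):
    """Residual block that fuses a local 3x3 convolution branch with the
    global spectral-transform branch."""

    def __init__(self, channels):
        super().__init__()
        self.local_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.global_branch = SpectralTransform(channels)
        self.fuse = nn.Conv2d(channels * 2, channels, kernel_size=1)

    def forward(self, x):
        local_feat = self.local_branch(x)
        global_feat = self.global_branch(x)
        out = self.fuse(torch.cat([local_feat, global_feat], dim=1))
        return x + out  # residual connection


if __name__ == "__main__":
    block = FFCResidualBlock(64)
    feat = torch.randn(1, 64, 64, 64)
    print(block(feat).shape)  # torch.Size([1, 64, 64, 64])

In a full model, blocks of this kind would typically be stacked inside the encoder alongside ordinary convolutions, so that the structure and texture features already carry global context before they reach the AFF fusion stage described above.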

History
  • Received: August 02, 2024
  • Revised: September 12, 2024
  • Accepted: September 19, 2024