Abstract: Digital mural inpainting is an important application of computer vision in the field of image inpainting. A digital mural inpainting model based on an improved two-stage generative adversarial network is proposed to address the blurring, structural disorder, and loss of detail that arise during inpainting. First, a feature optimization and fusion strategy is designed in the first-stage generator: features at different scales in the encoder are optimized and fused into the decoder in proportion, reducing the loss of feature information during convolution. Then, in the second-stage generator, a dilated residual module replaces the plain dilated convolutions; combining dilated convolutions with a small dilation rate and a residual connection enlarges the receptive field while reducing the accumulation of holes, which effectively alleviates grid artifacts in the repaired regions. Experimental results show that, compared with other restoration algorithms on the mural dataset, the proposed method has clear advantages in both visual quality and objective metrics: the peak signal-to-noise ratio (PSNR) is improved by 3–5 dB on average, and the structural similarity (SSIM) is improved by 2%–6% on average.
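The following is a minimal sketch, not the authors' released code, of the two mechanisms named in the abstract, assuming a PyTorch implementation: a dilated residual block that wraps small-dilation-rate convolutions in a residual connection to enlarge the receptive field while limiting gridding artifacts, and a proportional fusion of an encoder skip feature into the decoder. The module names, the learnable mixing scalar, and the specific dilation rate are illustrative assumptions; the paper's exact weighting and layer configuration may differ.

```python
# Illustrative sketch only; module names, dilation rate, and the learnable
# mixing weight are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class DilatedResidualBlock(nn.Module):
    """Residual block whose inner convolutions use a small dilation rate."""

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity shortcut keeps local detail; the dilated branch widens
        # the receptive field without stacking large dilation rates.
        return self.relu(x + self.body(x))


class ProportionalFusion(nn.Module):
    """Fuse an encoder skip feature with a decoder feature in proportion."""

    def __init__(self, channels: int):
        super().__init__()
        # A learnable scalar squashed into (0, 1) controls the mixing ratio;
        # the paper's exact proportional weighting scheme may differ.
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, decoder_feat: torch.Tensor,
                encoder_feat: torch.Tensor) -> torch.Tensor:
        a = torch.sigmoid(self.alpha)
        fused = a * encoder_feat + (1.0 - a) * decoder_feat
        return self.conv(fused)


if __name__ == "__main__":
    feat = torch.randn(1, 64, 64, 64)   # decoder feature map
    skip = torch.randn(1, 64, 64, 64)   # encoder skip feature at the same scale
    out = DilatedResidualBlock(64)(feat)
    out = ProportionalFusion(64)(out, skip)
    print(out.shape)  # torch.Size([1, 64, 64, 64])
```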