Abstract:
Existing image inpainting methods still suffer from problems such as blurred edges, missing texture, and incoherent or distorted repairs. To address these problems, a parallel generative adversarial inpainting network fused with edge structure is proposed. An edge structure repair network is first designed to recover complete edge structure information; the damaged image and the edge structure information are then fed separately into a parallel network built on gated convolution, where each branch extracts features through an improved contextual attention module and a multi-scale fusion block to produce the final repaired image. Experimental results on the Paris StreetView and CelebA-HQ datasets show that the proposed method outperforms the comparison algorithms in PSNR and SSIM in most cases, with average improvements of 6.2% and 5.5%, respectively, under large-proportion masks, achieving realistic restoration results.