Abstract: Accurate segmentation of diabetic retinopathy lesions is the prerequisite and key step toward automatic diagnosis of retinopathy. However, most existing segmentation models suffer from limitations such as a large number of parameters, unsatisfactory training results, or even failure to process the datasets properly. To this end, an improved Ghost convolution module and a multi-scale feature fusion module are added to the original U-Net network, and an improved U-Net algorithm for fundus lesion image segmentation is proposed. The model achieves good segmentation results with a small number of parameters and low computational complexity. First, the Ghost module replaces the original convolutions: Ghost convolution and Ghost downsampling convolution modules are designed to maintain accuracy while reducing the number of parameters. Second, a lightweight Half U-Net multi-scale feature fusion module is designed to obtain multi-scale information, and the CBAM attention mechanism is introduced to improve adaptability to lesion targets of different scales, thereby better extracting information about small lesions. On the two publicly available datasets e_optha and IDRiD, the improved model achieves mIoU of 61.42% and 61.84%, and F1 scores of 70.59% and 69.41%, respectively. With only 5.48 M parameters and 35.46 GMac FLOPs, the model is more compact and achieves higher segmentation accuracy than U-Net, Att-UNet, and other models.
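For illustration, the sketch below shows how a Ghost-style convolution of the kind named in the abstract could replace a standard convolution block: a primary convolution produces a few intrinsic feature maps, and a cheap depthwise operation generates the remaining "ghost" maps, which are concatenated to form the output. This is a minimal PyTorch sketch following the general Ghost-module idea; the class name, hyperparameters (ratio, kernel sizes), and layer ordering are assumptions for demonstration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost convolution sketch: intrinsic maps from a primary conv,
    plus cheap depthwise 'ghost' maps, concatenated channel-wise."""
    def __init__(self, in_ch, out_ch, kernel_size=3, ratio=2, dw_size=3, stride=1):
        super().__init__()
        self.out_ch = out_ch
        init_ch = out_ch // ratio          # intrinsic feature maps
        new_ch = out_ch - init_ch          # ghost feature maps
        # Primary convolution: a normal (but narrower) conv + BN + ReLU
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size, stride, kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        # Cheap operation: depthwise conv generating ghost maps from the intrinsic ones
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, new_ch, dw_size, 1, dw_size // 2, groups=init_ch, bias=False),
            nn.BatchNorm2d(new_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        out = torch.cat([y, self.cheap(y)], dim=1)
        return out[:, :self.out_ch]        # trim in case of rounding from the ratio

# Example: a 64-channel Ghost block with stride 2 could stand in for a
# downsampling convolution in the encoder path of a U-Net-style network.
block = GhostConv(32, 64, stride=2)
feat = block(torch.randn(1, 32, 128, 128))  # -> (1, 64, 64, 64)
```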