Abstract: False detections and missed detections are common in most edge detection algorithms, because images may be acquired in bad weather, the image content itself can be complex, and edge cues become vague, especially when edges lie close to the background. Owing to design defects of the model or to the imbalance between edge pixels and non-edge pixels in the training samples, the edge maps produced by most algorithms also suffer from thick lines and low quality. A multi-scale convolutional neural network is proposed; it is composed of three sub-structures, each of which accepts one scale of the input image. The network learns features under different scales of vision and extracts the image edges by gradually fusing the edge responses from coarse to fine. In addition to the advantages of multi-scale processing, a self-attention mechanism is introduced to improve the ability to capture the internal relevance of convolutional features. A new loss function, composed of a cross-entropy term and an L1-norm term, is proposed to train the network and to mitigate the impact of the imbalance in the training samples. Three indices, Optimal Dataset Scale (ODS), Optimal Image Scale (OIS), and Average Precision (AP), are used to measure the quality of edge detection. The proposed method scores 0.845, 0.856, and 0.886 on these three indicators, respectively, when tested on the BIPED dataset, and scores 0.826 on the F-measure when tested on the BSDS500 dataset. The experimental results show that the algorithm generates more delicate image edge results.
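For illustration only, a combined loss of the kind described above is often written with a class-balancing weight on the cross-entropy term and a weighting hyperparameter on the L1 term; the symbols $\beta$ and $\lambda$ and this exact form are assumptions, not the paper's stated formulation:

$$\mathcal{L}(P, Y) = -\sum_{i}\bigl[\beta\, y_i \log p_i + (1-\beta)(1-y_i)\log(1-p_i)\bigr] + \lambda \sum_{i} \lvert p_i - y_i \rvert,$$

where $p_i$ is the predicted edge probability at pixel $i$, $y_i$ is the ground-truth label, $\beta$ is a balancing factor (e.g., the fraction of non-edge pixels, so the sparse edge class is up-weighted), and $\lambda$ controls the contribution of the L1 term.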