Semantic Segmentation of Remote Sensing Images Based on Mixed Deep Convolution

Abstract: Semantic segmentation of high-resolution remote sensing images is an important part of remote sensing interpretation. Such images contain a large amount of complex ground-object feature information, and the sizes of different ground objects vary considerably, which makes semantic segmentation of remote sensing images difficult. To address this problem, MDU-Net, a remote sensing image semantic segmentation model based on mixed deep convolution, is designed and implemented. The encoder of this model uses a staged parallel network structure, and a dynamic network topology is realized by dynamically assigning weights to the sub-branches at different levels. In addition, a channel and spatial attention module is introduced to improve the feature fusion from encoder to decoder and thereby the segmentation quality. On the ISPRS validation dataset, the model's test-set accuracy is 3.44 percentage points higher than that of DeepLabv3+. Experimental results show that the proposed network achieves good segmentation results on high-resolution remote sensing images.
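The two mechanisms the abstract names can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: it shows (1) parallel depthwise-convolution branches with different kernel sizes fused by dynamically assigned (here, softmax-normalized) weights, and (2) a CBAM-style channel-and-spatial attention gate of the kind applied to encoder features before fusion. All kernel sizes, shapes, and the fusion rule are illustrative assumptions.

```python
import numpy as np

def depthwise_conv2d(x, kernel):
    """Per-channel 2-D convolution, 'same' zero padding, stride 1.
    x: (C, H, W); kernel: (C, k, k) with odd k."""
    C, H, W = x.shape
    k = kernel.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * kernel[c])
    return out

def softmax(v):
    e = np.exp(v - np.max(v))
    return e / e.sum()

def mixed_depthwise(x, kernels, branch_logits):
    """Weighted sum of parallel depthwise branches with different kernel
    sizes; the logits stand in for the per-branch weights the model learns."""
    w = softmax(branch_logits)
    return sum(wi * depthwise_conv2d(x, ki) for wi, ki in zip(w, kernels))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_spatial_attention(x):
    """CBAM-style gate (learnable parameters omitted): channel weights from
    global average pooling, then a spatial map from the channel-wise mean."""
    ch = sigmoid(x.mean(axis=(1, 2)))   # (C,)  channel gate
    x = x * ch[:, None, None]
    sp = sigmoid(x.mean(axis=0))        # (H, W) spatial gate
    return x * sp[None, :, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))                             # C=4 feature map
kernels = [rng.standard_normal((4, k, k)) for k in (3, 5, 7)]  # 3 branches
y = mixed_depthwise(x, kernels, np.zeros(3))                   # uniform fusion
z = channel_spatial_attention(y)
print(y.shape, z.shape)
```

Pushing one branch logit far above the others makes the fusion collapse onto that branch, which is the sense in which the encoder's topology is "dynamic": the learned weights decide how much each kernel size contributes at each level.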

     
