PVS-CNN: Point-Voxel CNN Optimized by Submanifold Sparse Convolution

Abstract: Classic 3D convolutional networks suffer from low efficiency and high GPU memory usage when performing segmentation and detection in scenes with large models. To address this, this paper proposes the PVS-CNN network framework, which achieves efficient, low-memory 3D convolution by updating a hash table and a sparse feature matrix, and improves PV-Conv with submanifold sparse convolution. PVS-CNN was evaluated on the ShapeNet and S3DIS datasets; experimental results show that it is 3.6 times faster than PVCNN while using only 0.55 times the GPU memory. In object detection, PVS-CNN outperforms F-PVCNN in both time efficiency and detection accuracy.
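The key idea the abstract names, submanifold sparse convolution, restricts computation to occupied (active) voxels and produces outputs only at those same sites, so the active set never dilates as layers stack; neighbor features are fetched through a hash table keyed by voxel coordinates. The following is a minimal illustrative sketch of that general technique, not the paper's actual PVS-CNN implementation; the function name and data layout are assumptions for illustration.

```python
import numpy as np

def submanifold_conv3d(features, weights, kernel_size=3):
    """Sketch of a submanifold sparse 3D convolution.

    features: dict mapping (x, y, z) voxel coordinate -> np.ndarray of shape (C_in,)
              (the dict itself plays the role of the coordinate hash table)
    weights:  np.ndarray of shape (k, k, k, C_in, C_out)

    Output features are produced ONLY at input-active sites, so the set of
    active voxels (and the hash table indexing it) stays fixed across layers.
    """
    k = kernel_size
    r = k // 2
    c_out = weights.shape[-1]
    out = {}
    for (x, y, z) in features:  # iterate over active sites only
        acc = np.zeros(c_out)
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                for dz in range(-r, r + 1):
                    # O(1) hash-table lookup of the neighboring voxel
                    nb = features.get((x + dx, y + dy, z + dz))
                    if nb is not None:  # empty voxels contribute nothing
                        acc += nb @ weights[dx + r, dy + r, dz + r]
        out[(x, y, z)] = acc
    return out
```

Because empty space is skipped entirely and outputs are never written at new coordinates, both the arithmetic cost and the memory footprint scale with the number of occupied voxels rather than with the dense grid volume, which is the source of the speed and memory gains the abstract reports.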
