RETINA MOTION ESTIMATOR-AIDED VIDEO OBJECT SEGMENTATION

    Abstract: Video object segmentation (VOS) is easily affected by fast object motion, occlusion, and similar conditions, so high-precision VOS remains a challenging task. Optical flow can aid object segmentation, but the flow in regions near motion boundaries is often estimated inaccurately, which in turn limits the performance of flow-based VOS. To overcome this limitation, the proposed method uses the motion-contour information extracted by a model of the retinal magnocellular pathway to assist optical-flow computation in motion-boundary regions, and combines it with conventional foreground-background VOS, alternately updating the optical flow and the segmentation. Experimental results on the public datasets DAVIS-2016, SegTrack-v2, and YouTube-Objects show that, compared with the baseline method, the proposed method improves average segmentation accuracy by 2.2, 1.3, and 1.9 percentage points, respectively.
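The alternating optimization described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the function names, the placeholder flow estimator, and the flow-magnitude thresholding used for foreground-background segmentation are all assumptions introduced here for clarity.

```python
import numpy as np

def estimate_flow(frame_a, frame_b, motion_contours, seg_mask):
    """Placeholder flow estimator (illustrative only).

    A real system would use a variational or learned flow method, with the
    retina-derived motion contours used to suppress smoothing across motion
    boundaries, where plain flow estimates are least reliable.
    """
    flow = np.zeros(frame_a.shape + (2,))
    boundary = motion_contours > 0.5  # assumed binary contour response
    # Assumption: trust the brightness-difference signal less near boundaries.
    flow[..., 0] = np.where(boundary, 0.0, frame_b - frame_a)
    return flow

def segment(frame, flow):
    """Placeholder foreground-background split: threshold the flow magnitude."""
    mag = np.linalg.norm(flow, axis=-1)
    return (mag > mag.mean()).astype(np.uint8)

def alternate_update(frame_a, frame_b, motion_contours, n_iters=3):
    """Alternately refine the optical flow and the segmentation mask."""
    seg = np.ones_like(frame_a, dtype=np.uint8)  # init: all pixels foreground
    flow = np.zeros(frame_a.shape + (2,))
    for _ in range(n_iters):
        flow = estimate_flow(frame_a, frame_b, motion_contours, seg)
        seg = segment(frame_a, flow)
    return flow, seg
```

The key design choice the abstract implies is the coupling: each flow update can use the current mask (and the motion contours) as a prior, and each segmentation update uses the refined flow, so errors near motion boundaries shrink over the iterations.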

     
