Abstract:
To address the difficulty of directly regressing the 6D pose of an object from a single RGB-D image in cluttered environments, a pose estimation method based on bidirectional fusion of keypoint features is proposed. The method selects keypoints from the RGB image and extracts dense color features, selects keypoints from the point cloud and extracts dense geometric features, and fuses the selected keypoint features bidirectionally during the encoding stage of the two feature extraction branches, exploiting the complementary information of the two data sources to obtain a better feature representation. Experimental results show that the proposed method achieves significantly higher accuracy than existing methods and exhibits strong robustness.
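As a rough illustration of the bidirectional fusion idea described above, the sketch below pairs each keypoint in one modality with its nearest 3D neighbor in the other modality and concatenates their features in both directions. All names and the nearest-neighbor pairing scheme are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def bidirectional_fuse(rgb_feats, rgb_xyz, geo_feats, geo_xyz):
    """Hypothetical sketch of bidirectional keypoint-feature fusion.

    Each RGB keypoint borrows the feature of its nearest point-cloud
    keypoint, and vice versa; borrowed features are concatenated onto
    the original ones. (This pairing rule is an assumption, not the
    paper's exact fusion operator.)
    """
    # Pairwise squared 3D distances between the two keypoint sets
    d = ((rgb_xyz[:, None, :] - geo_xyz[None, :, :]) ** 2).sum(-1)
    nn_geo = d.argmin(axis=1)  # nearest geometric kp for each RGB kp
    nn_rgb = d.argmin(axis=0)  # nearest RGB kp for each geometric kp

    # Fuse by concatenation in both directions
    fused_rgb = np.concatenate([rgb_feats, geo_feats[nn_geo]], axis=1)
    fused_geo = np.concatenate([geo_feats, rgb_feats[nn_rgb]], axis=1)
    return fused_rgb, fused_geo
```

In a real encoder this exchange would be repeated at each encoding layer, so that color and geometric branches refine each other's representations progressively rather than only at the end.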