Occlusion-Robust Object Pose Estimation with Holistic Representation

Abstract

Practical object pose estimation demands robustness against occlusions of the target object. State-of-the-art (SOTA) object pose estimators take a two-stage approach, where the first stage predicts 2D landmarks using a deep network and the second stage solves for 6DOF pose from 2D-3D correspondences. Although widely adopted, such two-stage approaches can suffer from novel occlusions when generalising and from weak landmark coherence caused by disrupted features. To address these issues, we develop a novel occlude-and-blackout batch augmentation technique to learn occlusion-robust deep features, and a multi-precision supervision architecture to encourage holistic pose representation learning for accurate and coherent landmark predictions. We perform careful ablation tests to verify the impact of our innovations, compare our method to SOTA pose estimators and report superior performance on the LINEMOD dataset. On the Occluded-LINEMOD and YCB-Video datasets, our method outperforms all non-refinement methods. We also demonstrate the high data efficiency of our method. Our code is available at https://github.com/BoChenYS/ROPE
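The abstract's occlude-and-blackout idea can be illustrated with a minimal sketch. The implementation below is an assumption for illustration only, not the paper's actual code: it pastes a random-noise patch (a synthetic occluder) onto each training image and zeroes out (blacks out) a second random patch, using plain NumPy on an `(N, H, W, C)` uint8 batch.

```python
import numpy as np

rng = np.random.default_rng(0)

def occlude_and_blackout(batch, occ_frac=0.3, p=1.0):
    """Illustrative occlusion-style augmentation (hypothetical sketch).

    For each image in the (N, H, W, C) uint8 batch, with probability p:
      1. paste a random-noise patch to simulate an unseen occluder;
      2. black out a second random patch to remove local evidence,
         pushing the network toward holistic features.
    Returns an augmented copy; the input batch is left untouched.
    """
    out = batch.copy()
    n, h, w, c = out.shape
    ph, pw = int(h * occ_frac), int(w * occ_frac)
    for i in range(n):
        if rng.random() < p:
            # simulated occluder: random-noise rectangle
            y, x = rng.integers(0, h - ph), rng.integers(0, w - pw)
            out[i, y:y + ph, x:x + pw] = rng.integers(0, 256, (ph, pw, c))
            # blackout: zero a second rectangle
            y, x = rng.integers(0, h - ph), rng.integers(0, w - pw)
            out[i, y:y + ph, x:x + pw] = 0
    return out

# Example: augment a dummy batch of four grey 64x64 images
batch = np.full((4, 64, 64, 3), 128, dtype=np.uint8)
aug = occlude_and_blackout(batch)
```

In practice such augmentation is applied on the fly inside the training data loader, so each epoch sees differently occluded views of the same images.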

Publication
In WACV 2022
Bo Chen