GCE-Pose: Global Context Enhancement for Category-level Object Pose Estimation
A key challenge in model-free category-level pose estimation is extracting contextual object features that generalize across varying instances within a category. Recent approaches leverage foundational features to capture semantic and geometric cues from data, but they fail under partial visibility. We overcome this with a first-complete-then-aggregate strategy for feature extraction that utilizes class priors. In this paper, we present GCE-Pose, a method that enhances pose estimation for novel instances by integrating a category-level global context prior. GCE-Pose first reconstructs the instance's global geometry and semantics with the proposed Semantic Shape Reconstruction (SSR) module: given an unseen partial RGB-D object instance, SSR deforms category-specific 3D semantic prototypes through a learned deep Linear Shape Model. We further introduce a Global Context Enhanced (GCE) feature fusion module that effectively fuses features from the partial RGB-D observation and the reconstructed global context. Extensive experiments validate the impact of our global context prior and the effectiveness of the GCE fusion module, demonstrating that GCE-Pose significantly outperforms existing methods on the challenging real-world datasets HouseCat6D and NOCS-REAL275. Our project page is available at https://colin-de.github.io/GCE-Pose/.
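The deep Linear Shape Model mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the function name `deform_prototype`, the array shapes, and the hand-picked coefficients are all illustrative; in the actual method the coefficients would be predicted by a network from the partial observation.

```python
import numpy as np

def deform_prototype(mean_shape, basis, coeffs):
    """Linear shape model: S = S_mean + sum_k c_k * B_k.

    mean_shape: (N, 3) mean point cloud of the category prototype
    basis:      (K, N, 3) learned deformation basis shapes
    coeffs:     (K,) instance-specific deformation coefficients
                (predicted by a network in the real method; free here)
    """
    # Weighted sum of basis shapes, added to the category mean
    return mean_shape + np.tensordot(coeffs, basis, axes=1)

# Toy example with random stand-in data
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(100, 3))          # category prototype
basis = rng.normal(size=(4, 100, 3))            # 4 learned basis shapes
coeffs = np.array([0.1, -0.2, 0.05, 0.0])       # per-instance weights
shape = deform_prototype(mean_shape, basis, coeffs)
print(shape.shape)  # (100, 3)
```

With all coefficients zero, the model reproduces the category mean shape; the instance-specific geometry is captured entirely by the low-dimensional coefficient vector.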
Weihang Li, Hongli Xu, Junwen Huang, Hyunjun Jung, Peter KT Yu, Nassir Navab, Benjamin Busam
Computing Technology, Computer Technology
Weihang Li, Hongli Xu, Junwen Huang, Hyunjun Jung, Peter KT Yu, Nassir Navab, Benjamin Busam. GCE-Pose: Global Context Enhancement for Category-level Object Pose Estimation [EB/OL]. (2025-06-24) [2025-07-20]. https://arxiv.org/abs/2502.04293.