Temporally Consistent Amodal Completion for 3D Human-Object Interaction Reconstruction
We introduce a novel framework for reconstructing dynamic human-object interactions from monocular video that overcomes challenges associated with occlusions and temporal inconsistencies. Traditional 3D reconstruction methods typically assume static objects or full visibility of dynamic subjects, leading to degraded performance when these assumptions are violated, particularly in scenarios where mutual occlusions occur. To address this, our framework leverages amodal completion to infer the complete structure of partially obscured regions. Unlike conventional approaches that operate on individual frames, our method integrates temporal context, enforcing coherence across video sequences to incrementally refine and stabilize reconstructions. This template-free strategy adapts to varying conditions without relying on predefined object models, significantly enhancing the recovery of intricate details in dynamic scenes. We validate our approach using 3D Gaussian Splatting on challenging monocular videos, demonstrating superior precision in handling occlusions and maintaining temporal stability compared to existing techniques.
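The abstract does not spell out how temporal coherence is enforced on the amodal completions, so the snippet below is only a minimal illustrative sketch of the general idea: per-frame amodal estimates for an occluded object are fused over time, trusting current observations in visible regions and propagating earlier estimates in occluded ones. The function name, the momentum-style fusion rule, and the toy data are all assumptions, not the paper's actual pipeline (a real system would also warp previous estimates with estimated motion before fusing).

```python
import numpy as np

def temporally_consistent_amodal_masks(visible_masks, per_frame_amodal, momentum=0.8):
    """Illustrative (assumed) fusion of noisy per-frame amodal completions.

    visible_masks    : list of HxW bool arrays  -- modal (visible) object masks
    per_frame_amodal : list of HxW float arrays -- single-frame amodal estimates in [0, 1]
    momentum         : weight on the propagated estimate for occluded pixels (assumed value)

    Returns a list of HxW float arrays: temporally smoothed amodal masks.
    """
    fused, prev = [], None
    for vis, amodal in zip(visible_masks, per_frame_amodal):
        amodal = amodal.astype(np.float32)
        if prev is None:
            cur = amodal
        else:
            # Visible pixels trust the current observation; occluded pixels lean on
            # the estimate carried over from previous frames (no motion warp here).
            cur = np.where(vis, amodal, momentum * prev + (1.0 - momentum) * amodal)
        # The amodal region must always contain the visible region.
        cur = np.maximum(cur, vis.astype(np.float32))
        fused.append(cur)
        prev = cur
    return fused


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = W = 64
    vis, amodal = [], []
    for t in range(5):
        full = np.zeros((H, W), dtype=bool)
        full[16:48, 16:48] = True              # true object extent
        v = full.copy()
        if t >= 2:
            v[:, 32:] = False                  # simulate occlusion of the right half
        noisy = np.clip(full.astype(np.float32)
                        + 0.1 * rng.standard_normal((H, W)), 0.0, 1.0)
        vis.append(v)
        amodal.append(noisy)                   # noisy single-frame completion
    out = temporally_consistent_amodal_masks(vis, amodal)
    print([round(float(m.mean()), 3) for m in out])  # coverage stays stable over time
```

In a full reconstruction pipeline, stabilized amodal masks like these could then condition the optimization of a 3D Gaussian Splatting representation frame by frame; the sketch only demonstrates the temporal-consistency step.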
Hyungjun Doh, Dong In Lee, Seunggeun Chi, Pin-Hao Huang, Kwonjoon Lee, Sangpil Kim, Karthik Ramani
Computing Technology, Computer Technology
Hyungjun Doh, Dong In Lee, Seunggeun Chi, Pin-Hao Huang, Kwonjoon Lee, Sangpil Kim, Karthik Ramani. Temporally Consistent Amodal Completion for 3D Human-Object Interaction Reconstruction [EB/OL]. (2025-07-10) [2025-07-22]. https://arxiv.org/abs/2507.08137