National Preprint Platform

CompleteMe: Reference-based Human Image Completion

Source: arXiv
English Abstract

Recent methods for human image completion can reconstruct plausible body shapes but often fail to preserve unique details, such as specific clothing patterns or distinctive accessories, without explicit reference images. Even state-of-the-art reference-based inpainting approaches struggle to accurately capture and integrate fine-grained details from reference images. To address this limitation, we propose CompleteMe, a novel reference-based human image completion framework. CompleteMe employs a dual U-Net architecture combined with a Region-focused Attention (RFA) Block, which explicitly guides the model's attention toward relevant regions in reference images. This approach effectively captures fine details and ensures accurate semantic correspondence, significantly improving the fidelity and consistency of completed images. Additionally, we introduce a challenging benchmark specifically designed for evaluating reference-based human image completion tasks. Extensive experiments demonstrate that our proposed method achieves superior visual quality and semantic consistency compared to existing techniques. Project page: https://liagm.github.io/CompleteMe/
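The abstract describes a dual U-Net architecture with a Region-focused Attention (RFA) Block that steers the model's attention toward relevant regions of the reference image. The paper's exact formulation is not given here, but the core idea of region-masked cross-attention can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the module name, the use of a boolean token mask, and the residual connection are all hypothetical choices.

```python
import torch
import torch.nn as nn

class RegionFocusedAttention(nn.Module):
    """Hypothetical sketch of region-focused cross-attention.

    Tokens from the image being completed (the target) attend only to
    reference-image tokens inside a given region mask, so fine-grained
    details are pulled from the relevant reference region.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, target_feats, ref_feats, region_mask):
        # target_feats: (B, N, C) tokens of the image being completed
        # ref_feats:    (B, M, C) tokens of the reference image
        # region_mask:  (B, M) boolean; True marks reference tokens
        #               the model is allowed to attend to
        B, N, C = target_feats.shape
        h = self.num_heads
        q = self.to_q(target_feats).view(B, N, h, C // h).transpose(1, 2)
        k = self.to_k(ref_feats).view(B, -1, h, C // h).transpose(1, 2)
        v = self.to_v(ref_feats).view(B, -1, h, C // h).transpose(1, 2)

        # Attention scores, with out-of-region reference tokens masked out
        attn = (q @ k.transpose(-2, -1)) * self.scale        # (B, h, N, M)
        attn = attn.masked_fill(~region_mask[:, None, None, :], float("-inf"))
        attn = attn.softmax(dim=-1)

        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return target_feats + self.proj(out)  # residual connection
```

In a dual U-Net setup, `target_feats` would come from the inpainting branch and `ref_feats` from the reference-encoding branch at a matching resolution; the mask restricts attention so only region-relevant reference details are injected.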

Yu-Ju Tsai, Brian Price, Qing Liu, Luis Figueroa, Daniil Pakhomov, Zhihong Ding, Scott Cohen, Ming-Hsuan Yang

Computing Technology; Computer Technology

Yu-Ju Tsai, Brian Price, Qing Liu, Luis Figueroa, Daniil Pakhomov, Zhihong Ding, Scott Cohen, Ming-Hsuan Yang. CompleteMe: Reference-based Human Image Completion [EB/OL]. (2025-04-28) [2025-06-20]. https://arxiv.org/abs/2504.20042.
