
Test3R: Learning to Reconstruct 3D at Test Time

Source: arXiv

Abstract

Dense matching methods like DUSt3R regress pairwise pointmaps for 3D reconstruction. However, the reliance on pairwise prediction and the limited generalization capability inherently restrict the global geometric consistency. In this work, we introduce Test3R, a surprisingly simple test-time learning technique that significantly boosts geometric accuracy. Using image triplets ($I_1,I_2,I_3$), Test3R generates reconstructions from pairs ($I_1,I_2$) and ($I_1,I_3$). The core idea is to optimize the network at test time via a self-supervised objective: maximizing the geometric consistency between these two reconstructions relative to the common image $I_1$. This ensures the model produces cross-pair consistent outputs, regardless of the inputs. Extensive experiments demonstrate that our technique significantly outperforms previous state-of-the-art methods on the 3D reconstruction and multi-view depth estimation tasks. Moreover, it is universally applicable and nearly cost-free, making it easily applied to other models and implemented with minimal test-time training overhead and parameter footprint. Code is available at https://github.com/nopQAQ/Test3R.

Qiuhong Shen, Shizun Wang, Xingyi Yang, Xinchao Wang, Yuheng Yuan

Subject: Computing Technology, Computer Science

Qiuhong Shen, Shizun Wang, Xingyi Yang, Xinchao Wang, Yuheng Yuan. Test3R: Learning to Reconstruct 3D at Test Time [EB/OL]. (2025-06-16) [2025-07-16]. https://arxiv.org/abs/2506.13750.
