NPLMV-PS: Neural Point-Light Multi-View Photometric Stereo
In this work we present a novel multi-view photometric stereo (MVPS) method. Like many works in 3D reconstruction, we leverage neural shape representations and learnt renderers. However, our work differs from state-of-the-art multi-view PS methods such as PS-NeRF or Supernormal in that we explicitly leverage per-pixel intensity renderings rather than relying mainly on estimated normals. We model point-light attenuation and explicitly raytrace cast shadows in order to best approximate the incoming radiance at each point. The estimated incoming radiance is used as input to a fully neural material renderer that uses minimal prior assumptions and is jointly optimised with the surface. Estimated normals and segmentation maps are also incorporated to maximise surface accuracy. Our method is among the first (along with Supernormal) to outperform the classical MVPS approach proposed by the DiLiGenT-MV benchmark, achieving an average Chamfer distance of 0.2mm for objects imaged at approximately 1.5m distance with approximately 400x400 resolution. Moreover, our method shows high robustness in the sparse MVPS setup (6 views, 6 lights), greatly outperforming the SOTA competitor (0.38mm vs 0.61mm) and illustrating the importance of neural rendering in multi-view photometric stereo.
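The near-field point-light model mentioned above can be sketched as follows. This is a minimal illustration of the inverse-square attenuation with an optional angular dissipation term commonly used in near-field photometric stereo; the parameter names (`phi`, `mu`) and the exact form used in NPLMV-PS are assumptions, not taken from the paper.

```python
import numpy as np

def point_light_radiance(points, normals, light_pos, phi=1.0, mu=0.0, light_dir=None):
    """Approximate incoming radiance at surface points from a near point light.

    Sketch under common assumptions: inverse-square falloff with intrinsic
    brightness phi, plus an optional angular dissipation exponent mu for
    non-isotropic (e.g. LED) lights. Cast shadows are ignored here; the paper
    handles them via explicit raytracing.
    """
    L = light_pos - points                       # vectors from surface to light
    d = np.linalg.norm(L, axis=-1, keepdims=True)
    L_hat = L / d                                # unit light directions
    atten = phi / d.squeeze(-1) ** 2             # inverse-square attenuation
    if light_dir is not None and mu > 0:
        # angular falloff around the light's principal direction
        cos_a = np.clip((-L_hat) @ light_dir, 0.0, None)
        atten = atten * cos_a ** mu
    # Lambertian-style foreshortening term (clamped at grazing angles)
    cos_i = np.clip(np.sum(normals * L_hat, axis=-1), 0.0, None)
    return atten * cos_i
```

For example, a point 2m directly below an isotropic unit-brightness light, with its normal facing the light, receives radiance 1/4.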
Fotios Logothetis, Roberto Cipolla, Ignas Budvytis
Subjects: computing and computer technology; optoelectronic technology; applied electronics
Fotios Logothetis, Roberto Cipolla, Ignas Budvytis. NPLMV-PS: Neural Point-Light Multi-View Photometric Stereo [EB/OL]. (2024-05-20) [2025-08-23]. https://arxiv.org/abs/2405.12057.