国家预印本平台 (National Preprint Platform)

FreeGave: 3D Physics Learning from Dynamic Videos by Gaussian Velocity
Source: arXiv
Abstract

In this paper, we aim to model 3D scene geometry, appearance, and the underlying physics purely from multi-view videos. By applying various governing PDEs as PINN losses or incorporating physics simulation into neural networks, existing works often fail to learn complex physical motions at boundaries or require object priors such as masks or types. In this paper, we propose FreeGave to learn the physics of complex dynamic 3D scenes without needing any object priors. The key to our approach is to introduce a physics code followed by a carefully designed divergence-free module for estimating a per-Gaussian velocity field, without relying on the inefficient PINN losses. Extensive experiments on three public datasets and a newly collected challenging real-world dataset demonstrate the superior performance of our method for future frame extrapolation and motion segmentation. Most notably, our investigation into the learned physics codes reveals that they truly learn meaningful 3D physical motion patterns in the absence of any human labels in training.
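The abstract's "divergence-free module" rests on a classical vector-calculus identity: any velocity field constructed as the curl of a vector potential satisfies div(curl ψ) = 0 by construction, so no PINN-style divergence penalty is needed. The sketch below illustrates only that identity; the toy potential `psi` and the finite-difference operators are illustrative stand-ins, not the paper's learned network.

```python
import numpy as np

def psi(p):
    # Toy smooth vector potential psi: R^3 -> R^3 (hypothetical stand-in
    # for a learned potential conditioned on a per-Gaussian physics code).
    x, y, z = p
    return np.array([np.sin(y * z), x * z**2, np.cos(x * y)])

def curl(f, p, h=1e-5):
    # Numerical curl of f at point p via central differences.
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (f(p + e) - f(p - e)) / (2 * h)  # Jacobian column d f / d p_j
    return np.array([J[2, 1] - J[1, 2],
                     J[0, 2] - J[2, 0],
                     J[1, 0] - J[0, 1]])

def divergence(f, p, h=1e-4):
    # Numerical divergence of f at point p via central differences.
    d = 0.0
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        d += (f(p + e)[j] - f(p - e)[j]) / (2 * h)
    return d

# Velocity defined as the curl of the potential is divergence-free
# up to finite-difference error, with no explicit penalty term.
v = lambda p: curl(psi, p)
p0 = np.array([0.3, -0.7, 1.2])
print(abs(divergence(v, p0)))  # close to zero
```

In the paper's setting, the incompressibility constraint is thus satisfied architecturally rather than enforced through a loss, which is what lets the method avoid the inefficient PINN objectives mentioned above.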

Jinxi Li, Ziyang Song, Siyuan Zhou, Bo Yang

Subjects: Physics; Mechanics

Jinxi Li, Ziyang Song, Siyuan Zhou, Bo Yang. FreeGave: 3D Physics Learning from Dynamic Videos by Gaussian Velocity [EB/OL]. (2025-06-09) [2025-06-16]. https://arxiv.org/abs/2506.07865.
