Articulate3D: Zero-Shot Text-Driven 3D Object Posing

Source: arXiv
Abstract

We propose a training-free method, Articulate3D, to pose a 3D asset through language control. Despite advances in vision and language models, this task remains surprisingly challenging. To achieve this goal, we decompose the problem into two steps. We modify a powerful image generator to create target images conditioned on the input image and a text instruction. We then align the mesh to the target images through a multi-view pose optimisation step. In detail, we introduce a self-attention rewiring mechanism (RSActrl) that decouples the source structure from pose within an image generative model, allowing it to maintain a consistent structure across varying poses. We observe that differentiable rendering is an unreliable signal for articulation optimisation; instead, we use keypoints to establish correspondences between input and target images. The effectiveness of Articulate3D is demonstrated across a diverse range of 3D objects and free-form text prompts, successfully manipulating poses while maintaining the original identity of the mesh. Quantitative evaluations and a comparative user study, in which our method was preferred over 85% of the time, confirm its superiority over existing approaches. Project page: https://odeb1.github.io/articulate3d_page_deb/
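The alignment stage described in the abstract lends itself to a small illustration. The sketch below is not the authors' implementation; it only demonstrates, under assumed simplifications, the idea of optimising articulation parameters against keypoint correspondences rather than a differentiable-rendering photometric loss. A planar two-joint kinematic chain stands in for the articulated mesh, synthetic 2D keypoints stand in for keypoints detected on the generated target images, and all specifics (link lengths, learning rate, iteration count) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): keypoint-based pose
# optimisation in the spirit of Articulate3D's second stage.
import torch

def forward_kinematics(angles, lengths):
    """Return 2D keypoint positions of a planar kinematic chain."""
    pts = [torch.zeros(2)]
    heading = torch.zeros(())
    pos = torch.zeros(2)
    for theta, link in zip(angles, lengths):
        heading = heading + theta
        pos = pos + link * torch.stack([torch.cos(heading), torch.sin(heading)])
        pts.append(pos)
    return torch.stack(pts)                      # shape: (num_joints + 1, 2)

# Assumed setup: link lengths and a "ground-truth" articulation used only to
# synthesise target keypoints (in the paper these would instead come from
# keypoint correspondences between the input and the generated target views).
lengths = torch.tensor([1.0, 0.8])
target_kpts = forward_kinematics(torch.tensor([0.6, -0.4]), lengths)

# Optimise the joint angles so the chain's keypoints match the targets,
# rather than relying on a photometric differentiable-rendering loss.
angles = torch.zeros(2, requires_grad=True)
optimiser = torch.optim.Adam([angles], lr=0.05)
for _ in range(300):
    optimiser.zero_grad()
    loss = ((forward_kinematics(angles, lengths) - target_kpts) ** 2).sum()
    loss.backward()
    optimiser.step()

print("recovered joint angles:", angles.detach())  # approx. [0.6, -0.4]
```

Because the loss depends only on sparse keypoint positions, the gradient remains informative in regimes where, as the abstract notes, a rendering-based signal would be unreliable.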

Oishi Deb, Anjun Hu, Ashkan Khakzar, Philip Torr, Christian Rupprecht

Computing Technology, Computer Technology

Oishi Deb, Anjun Hu, Ashkan Khakzar, Philip Torr, Christian Rupprecht. Articulate3D: Zero-Shot Text-Driven 3D Object Posing [EB/OL]. (2025-08-26) [2025-09-05]. https://arxiv.org/abs/2508.19244.
