
OpenHOI: Open-World Hand-Object Interaction Synthesis with Multimodal Large Language Model

Source: arXiv
Abstract

Understanding and synthesizing realistic 3D hand-object interactions (HOI) is critical for applications ranging from immersive AR/VR to dexterous robotics. Existing methods struggle with generalization, performing well on closed-set objects and predefined tasks but failing to handle unseen objects or open-vocabulary instructions. We introduce OpenHOI, the first framework for open-world HOI synthesis, capable of generating long-horizon manipulation sequences for novel objects guided by free-form language commands. Our approach integrates a 3D Multimodal Large Language Model (MLLM) fine-tuned for joint affordance grounding and semantic task decomposition, enabling precise localization of interaction regions (e.g., handles, buttons) and breakdown of complex instructions (e.g., "Find a water bottle and take a sip") into executable sub-tasks. To synthesize physically plausible interactions, we propose an affordance-driven diffusion model paired with a training-free physics refinement stage that minimizes penetration and optimizes affordance alignment. Evaluations across diverse scenarios demonstrate OpenHOI's superiority over state-of-the-art methods in generalizing to novel object categories, multi-stage tasks, and complex language instructions. Our project page is at https://openhoi.github.io.
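The abstract names a training-free physics refinement stage that minimizes penetration and optimizes affordance alignment, but does not spell out the objective. Purely as an illustration of what such a stage could look like, the sketch below runs gradient descent on the synthesized hand vertices against a penetration penalty (a ReLU on the object's signed distance) plus an affordance-alignment term; the function name, the `object_sdf` and `affordance_pts` inputs, and the loss weights are all hypothetical assumptions, not taken from the paper.

```python
import torch

def refine_hand_pose(hand_verts, object_sdf, affordance_pts,
                     steps=100, lr=1e-2, w_pen=1.0, w_aff=0.5):
    """Hypothetical training-free refinement sketch (not the paper's code).

    hand_verts:     (V, 3) hand mesh vertices from the diffusion stage
    object_sdf:     callable mapping (N, 3) points -> signed distances,
                    negative inside the object
    affordance_pts: (M, 3) points on the grounded affordance region
    """
    verts = hand_verts.clone().requires_grad_(True)
    opt = torch.optim.Adam([verts], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Penetration: penalize any vertex with negative signed distance,
        # i.e. a vertex lying inside the object.
        sdf_vals = object_sdf(verts)
        loss_pen = torch.relu(-sdf_vals).sum()
        # Affordance alignment: pull each vertex toward its nearest
        # point on the grounded affordance region.
        dists = torch.cdist(verts, affordance_pts)   # (V, M)
        loss_aff = dists.min(dim=1).values.mean()
        loss = w_pen * loss_pen + w_aff * loss_aff
        loss.backward()
        opt.step()
    return verts.detach()
```

Clamping the signed distance with a ReLU is a common penetration penalty in grasp refinement; the paper's actual formulation and optimization variables may differ.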

Zhenhao Zhang, Ye Shi, Lingxiao Yang, Suting Ni, Qi Ye, Jingya Wang

Subjects: Computing Technology; Computer Technology

Zhenhao Zhang, Ye Shi, Lingxiao Yang, Suting Ni, Qi Ye, Jingya Wang. OpenHOI: Open-World Hand-Object Interaction Synthesis with Multimodal Large Language Model [EB/OL]. (2025-05-24) [2025-06-14]. https://arxiv.org/abs/2505.18947.
