PAVLM: Advancing Point Cloud based Affordance Understanding Via Vision-Language Model
Affordance understanding, the task of identifying actionable regions on 3D objects, plays a vital role in enabling robotic systems to engage with and operate within the physical world. Although Vision-Language Models (VLMs) excel at high-level reasoning and long-horizon planning for robotic manipulation, they still fall short in grasping the nuanced physical properties required for effective human-robot interaction. In this paper, we introduce PAVLM (Point cloud Affordance Vision-Language Model), an innovative framework that utilizes the extensive multimodal knowledge embedded in pre-trained language models to enhance 3D affordance understanding of point clouds. PAVLM integrates a geometric-guided propagation module with hidden embeddings from large language models (LLMs) to enrich visual semantics. On the language side, we prompt Llama-3.1 models to generate refined, context-aware text, augmenting the instructional input with deeper semantic cues. Experimental results on the 3D-AffordanceNet benchmark demonstrate that PAVLM outperforms baseline methods on both full and partial point clouds, particularly excelling in generalization to novel, open-world affordance tasks on 3D objects. For more information, visit our project site: pavlm-source.github.io.
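To make the fusion idea concrete, the following is a minimal, hypothetical sketch of how per-point geometric features might be conditioned on hidden embeddings from a frozen language model to predict a per-point affordance heatmap. All module names, dimensions, and the cross-attention fusion scheme here are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: language-conditioned point-cloud affordance prediction.
# Assumes point features come from some point-cloud backbone and text hidden
# states come from a frozen LLM (e.g., a Llama-3.1 prompt encoding).
import torch
import torch.nn as nn


class GeometricGuidedFusion(nn.Module):
    """Fuses per-point features with LLM hidden states via cross-attention."""

    def __init__(self, point_dim=256, text_dim=4096, hidden_dim=256):
        super().__init__()
        self.point_proj = nn.Linear(point_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid(),  # per-point affordance score in [0, 1]
        )

    def forward(self, point_feats, text_hidden):
        # point_feats: (B, N, point_dim) geometric features per point
        # text_hidden: (B, T, text_dim) hidden states of the instruction prompt
        q = self.point_proj(point_feats)
        kv = self.text_proj(text_hidden)
        fused, _ = self.cross_attn(q, kv, kv)   # language-conditioned point features
        return self.head(fused).squeeze(-1)     # (B, N) affordance heatmap


if __name__ == "__main__":
    model = GeometricGuidedFusion()
    points = torch.randn(2, 2048, 256)   # placeholder point-cloud backbone features
    text = torch.randn(2, 32, 4096)      # placeholder LLM hidden states
    print(model(points, text).shape)     # torch.Size([2, 2048])
```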
Shang-Ching Liu, Van Nhiem Tran, Wenkai Chen, Wei-Lun Cheng, Yen-Lin Huang, I-Bin Liao, Yung-Hui Li, Jianwei Zhang
Computing technology, computer technology
Shang-Ching Liu, Van Nhiem Tran, Wenkai Chen, Wei-Lun Cheng, Yen-Lin Huang, I-Bin Liao, Yung-Hui Li, Jianwei Zhang. PAVLM: Advancing Point Cloud based Affordance Understanding Via Vision-Language Model [EB/OL]. (2025-07-06) [2025-07-21]. https://arxiv.org/abs/2410.11564.