Adaptive Visuo-Tactile Fusion with Predictive Force Attention for Dexterous Manipulation
Effectively utilizing multi-sensory data is important for robots to generalize across diverse tasks. However, the heterogeneous nature of these modalities makes fusion challenging. Existing methods propose strategies to obtain comprehensively fused features but often ignore the fact that each modality requires different levels of attention at different manipulation stages. To address this, we propose a force-guided attention fusion module that adaptively adjusts the weights of visual and tactile features without human labeling. We also introduce a self-supervised future force prediction auxiliary task to reinforce the tactile modality, mitigate data imbalance, and encourage proper adjustment. Our method achieves an average success rate of 93% across three fine-grained, contact-rich tasks in real-world experiments. Further analysis shows that our policy appropriately adjusts attention to each modality at different manipulation stages. The videos can be viewed at https://adaptac-dex.github.io/.
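To make the idea of force-guided attention fusion concrete, below is a minimal sketch in PyTorch. The abstract does not specify the architecture, so the feature dimensions, the linear layers, and the two-way softmax weighting are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ForceGuidedFusion(nn.Module):
    """Illustrative sketch: force-conditioned attention over visual and tactile features."""

    def __init__(self, feat_dim: int, force_dim: int):
        super().__init__()
        # Map the current force reading to per-modality attention logits (assumed design).
        self.force_to_logits = nn.Linear(force_dim, 2)
        # Auxiliary head predicting the next-step force from the fused feature (assumed design).
        self.future_force_head = nn.Linear(feat_dim, force_dim)

    def forward(self, visual_feat, tactile_feat, force):
        # visual_feat, tactile_feat: (B, feat_dim); force: (B, force_dim)
        weights = torch.softmax(self.force_to_logits(force), dim=-1)          # (B, 2)
        fused = weights[:, :1] * visual_feat + weights[:, 1:] * tactile_feat  # weighted fusion
        pred_future_force = self.future_force_head(fused)                     # auxiliary prediction
        return fused, weights, pred_future_force
```

In such a setup, the auxiliary loss would be a regression loss (e.g., MSE) between the predicted and the actual next-step force, trained jointly with the policy objective, so the attention weights are learned without any human labeling of modality importance.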
Jinzhou Li, Tianhao Wu, Jiyao Zhang, Zeyuan Chen, Haotian Jin, Mingdong Wu, Yujun Shen, Yaodong Yang, Hao Dong
Computing Technology, Computer Technology
Jinzhou Li, Tianhao Wu, Jiyao Zhang, Zeyuan Chen, Haotian Jin, Mingdong Wu, Yujun Shen, Yaodong Yang, Hao Dong. Adaptive Visuo-Tactile Fusion with Predictive Force Attention for Dexterous Manipulation [EB/OL]. (2025-05-20) [2025-06-15]. https://arxiv.org/abs/2505.13982.