HydraMamba: Multi-Head State Space Model for Global Point Cloud Learning

Source: arXiv
Abstract

The attention mechanism has become a dominant operator in point cloud learning, but its quadratic complexity limits inter-point interactions and hinders long-range dependency modeling between objects. Thanks to its excellent long-range modeling capability at linear complexity, the selective state space model (S6), the core of Mamba, has been exploited in point cloud learning for long-range dependency interactions over the entire point cloud. Despite significant progress, related works still suffer from imperfect point cloud serialization and a lack of locality learning. To this end, we explore a state space model-based point cloud network, termed HydraMamba, to address the above challenges. Specifically, we design a shuffle serialization strategy that makes unordered point sets better suited to the causal nature of S6. Meanwhile, to overcome the deficiency of existing techniques in locality learning, we propose a ConvBiS6 layer capable of capturing local geometries and global context dependencies synergistically. In addition, we extend the multi-head design to S6, yielding MHS6 and further enhancing its modeling capability. HydraMamba achieves state-of-the-art results on various tasks at both object-level and scene-level. The code is available at https://github.com/Point-Cloud-Learning/HydraMamba.
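Two of the abstract's central ideas, shuffling the point order before a causal scan and running the selective recurrence with multiple heads, can be sketched compactly. Below is a minimal, illustrative PyTorch sketch, not the authors' implementation: every name here (shuffle_serialize, MultiHeadSSM, d_state, the sequential per-head recurrence) is a hypothetical simplification; the real HydraMamba code is in the linked repository.

```python
# Minimal sketch (assumptions, not the paper's code) of two ideas from the
# abstract: (1) shuffle serialization, which randomly permutes an unordered
# point set so a causal scan sees no fixed order bias, and (2) a multi-head
# selective (input-dependent) state space scan in the spirit of MHS6.
import torch
import torch.nn as nn


def shuffle_serialize(points: torch.Tensor) -> torch.Tensor:
    """Randomly permute the N points in each batch element (B, N, C)."""
    b, n, _ = points.shape
    perm = torch.argsort(torch.rand(b, n, device=points.device), dim=1)
    return torch.gather(points, 1, perm.unsqueeze(-1).expand_as(points))


class MultiHeadSSM(nn.Module):
    """Toy multi-head selective state space layer: each head runs an
    input-dependent linear recurrence over the serialized sequence."""

    def __init__(self, d_model: int, n_heads: int = 4, d_state: int = 16):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.dh, self.ds = n_heads, d_model // n_heads, d_state
        self.to_dt = nn.Linear(d_model, n_heads)              # per-head step size
        self.A = nn.Parameter(-torch.rand(n_heads, d_state))  # negative => stable decay
        self.to_B = nn.Linear(d_model, n_heads * d_state)     # input projection
        self.to_C = nn.Linear(d_model, n_heads * d_state)     # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, D)
        b, n, _ = x.shape
        dt = torch.nn.functional.softplus(self.to_dt(x))      # (B, N, H), > 0
        Bm = self.to_B(x).view(b, n, self.h, self.ds)
        Cm = self.to_C(x).view(b, n, self.h, self.ds)
        xh = x.view(b, n, self.h, self.dh)
        state = x.new_zeros(b, self.h, self.ds, self.dh)      # per-head hidden state
        out = []
        for t in range(n):                                    # sequential scan
            decay = torch.exp(dt[:, t, :, None] * self.A)     # (B, H, S) in (0, 1]
            state = decay[..., None] * state + \
                Bm[:, t, :, :, None] * xh[:, t, :, None, :]
            out.append(torch.einsum('bhs,bhsd->bhd', Cm[:, t], state))
        return torch.stack(out, dim=1).reshape(b, n, -1)


pts = torch.randn(2, 128, 64)                 # 128 points with 64-dim features
y = MultiHeadSSM(64)(shuffle_serialize(pts))
print(y.shape)                                # torch.Size([2, 128, 64])
```

A real S6 implementation would replace the Python loop with a hardware-aware parallel scan; the loop is kept here only to make the per-step selective recurrence explicit.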

Kanglin Qu, Pan Gao, Qun Dai, Yuanhao Sun

Subject: Computing Technology, Computer Technology

Kanglin Qu, Pan Gao, Qun Dai, Yuanhao Sun. HydraMamba: Multi-Head State Space Model for Global Point Cloud Learning [EB/OL]. (2025-07-26) [2025-08-10]. https://arxiv.org/abs/2507.19778.
