
HybridTM: Combining Transformer and Mamba for 3D Semantic Segmentation

Source: arXiv
Abstract

Transformer-based methods have demonstrated remarkable capabilities in 3D semantic segmentation through their powerful attention mechanisms, but their quadratic complexity limits the modeling of long-range dependencies in large-scale point clouds. Recent Mamba-based approaches offer efficient processing with linear complexity, yet they struggle to produce expressive representations when extracting 3D features. Effectively combining these complementary strengths remains an open challenge in this field. In this paper, we propose HybridTM, the first hybrid architecture that integrates Transformer and Mamba for 3D semantic segmentation. We further propose the Inner Layer Hybrid Strategy, which combines attention and Mamba at a finer granularity, enabling the simultaneous capture of long-range dependencies and fine-grained local features. Extensive experiments demonstrate the effectiveness and generalization of HybridTM on diverse indoor and outdoor datasets, where it achieves state-of-the-art performance on the ScanNet, ScanNet200, and nuScenes benchmarks. The code will be made available at https://github.com/deepinact/HybridTM.
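The paper's code has not yet been released, so as a rough illustration of what "combining attention and Mamba at a finer granularity" could mean, the PyTorch sketch below stacks an attention sub-step and a Mamba-style sub-step inside a single layer, each behind its own pre-norm and residual connection. InnerHybridLayer, MambaStandIn, and all dimensions here are illustrative assumptions, not the authors' design; the Mamba operator is replaced by a gated causal convolution so the sketch runs without the mamba_ssm dependency.

import torch
import torch.nn as nn


class MambaStandIn(nn.Module):
    """Stand-in for a selective state-space (Mamba) block.

    A gated causal depthwise convolution keeps this sketch dependency-free;
    a faithful implementation would use e.g. the mamba_ssm package instead.
    """

    def __init__(self, dim: int, kernel_size: int = 4):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size - 1, groups=dim)
        self.gate = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, channels); causal conv mixes along the token axis.
        n = x.shape[1]
        y = self.conv(x.transpose(1, 2))[..., :n].transpose(1, 2)
        return self.proj(y * torch.sigmoid(self.gate(x)))


class InnerHybridLayer(nn.Module):
    """Hypothetical inner-layer hybrid: attention and a Mamba-style
    operator applied back to back inside one layer."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm_attn = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_ssm = nn.LayerNorm(dim)
        self.ssm = MambaStandIn(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Attention sub-step: fine-grained pairwise interactions.
        h = self.norm_attn(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        # Mamba-style sub-step: linear-time mixing over serialized point tokens.
        return x + self.ssm(self.norm_ssm(x))


if __name__ == "__main__":
    tokens = torch.randn(2, 1024, 64)  # 2 clouds, 1024 serialized tokens, 64 channels
    print(InnerHybridLayer(dim=64)(tokens).shape)  # torch.Size([2, 1024, 64])

Placing both operators inside one layer, rather than alternating whole Transformer and Mamba layers, is one plausible reading of the "finer granularity" claim; the actual ordering and normalization scheme would have to be confirmed against the released code.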

Xinyu Wang, Jinghua Hou, Zhe Liu, Yingying Zhu

Computing Technology, Computer Technology

Xinyu Wang, Jinghua Hou, Zhe Liu, Yingying Zhu. HybridTM: Combining Transformer and Mamba for 3D Semantic Segmentation [EB/OL]. (2025-07-24) [2025-08-10]. https://arxiv.org/abs/2507.18575.
