
MTMamba++: Enhancing Multi-Task Dense Scene Understanding via Mamba-Based Decoders

Source: arXiv

Abstract

Multi-task dense scene understanding, which trains a single model for multiple dense prediction tasks, has a wide range of application scenarios. Capturing long-range dependencies and enhancing cross-task interactions are crucial to multi-task dense prediction. In this paper, we propose MTMamba++, a novel architecture for multi-task scene understanding featuring a Mamba-based decoder. It contains two types of core blocks: the self-task Mamba (STM) block and the cross-task Mamba (CTM) block. STM handles long-range dependencies by leveraging state-space models, while CTM explicitly models task interactions to facilitate information exchange across tasks. We design two variants of the CTM block, F-CTM and S-CTM, which enhance cross-task interaction from the feature and semantic perspectives, respectively. Extensive experiments on the NYUDv2, PASCAL-Context, and Cityscapes datasets demonstrate the superior performance of MTMamba++ over CNN-based, Transformer-based, and diffusion-based methods while maintaining high computational efficiency. The code is available at https://github.com/EnVision-Research/MTMamba.
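The abstract describes a decoder built from per-task STM blocks and a shared cross-task block. As a rough illustration of how such a stage could compose (per-task long-range modeling, then feature-level fusion in the spirit of F-CTM), here is a minimal PyTorch sketch. The `ToySSM` stand-in, all class names, and the gating scheme are assumptions made for illustration only; the actual MTMamba++ blocks use 2D selective state-space (Mamba) layers, for which see the linked repository.

```python
# Hypothetical sketch of one Mamba-style multi-task decoder stage.
# ToySSM is a lightweight stand-in for a 2D selective state-space layer,
# used only to keep the example self-contained and runnable.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySSM(nn.Module):
    """Stand-in for a 2D state-space (Mamba) layer -- NOT the real thing."""
    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mix = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        h = self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        return x + self.proj(F.silu(self.mix(h)))  # residual update


class STMBlock(nn.Module):
    """Self-task block: long-range modeling within one task's feature map."""
    def __init__(self, dim: int):
        super().__init__()
        self.ssm = ToySSM(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.ssm(x)


class FCTMBlock(nn.Module):
    """Feature-level cross-task block: build a shared feature from all
    tasks, then gate it back into each task's feature (F-CTM-style)."""
    def __init__(self, dim: int, num_tasks: int):
        super().__init__()
        self.fuse = nn.Conv2d(dim * num_tasks, dim, kernel_size=1)
        self.ssm = ToySSM(dim)
        self.gates = nn.ModuleList(
            [nn.Conv2d(2 * dim, dim, kernel_size=1) for _ in range(num_tasks)]
        )

    def forward(self, feats):  # feats: list of (B, C, H, W), one per task
        shared = self.ssm(self.fuse(torch.cat(feats, dim=1)))
        return [g(torch.cat([f, shared], dim=1))
                for f, g in zip(feats, self.gates)]


class DecoderStage(nn.Module):
    """One decoder stage: per-task STM blocks, then cross-task exchange."""
    def __init__(self, dim: int, num_tasks: int):
        super().__init__()
        self.stm = nn.ModuleList([STMBlock(dim) for _ in range(num_tasks)])
        self.ctm = FCTMBlock(dim, num_tasks)

    def forward(self, feats):
        feats = [blk(f) for blk, f in zip(self.stm, feats)]
        return self.ctm(feats)


if __name__ == "__main__":
    tasks = 3  # e.g. segmentation, depth, surface normals
    stage = DecoderStage(dim=64, num_tasks=tasks)
    feats = [torch.randn(2, 64, 32, 32) for _ in range(tasks)]
    outs = stage(feats)
    print([tuple(o.shape) for o in outs])  # three (2, 64, 32, 32) tensors
```

The design choice the sketch tries to mirror is the separation of concerns named in the abstract: STM blocks operate on each task independently, while the cross-task block is the only place where task features exchange information.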

Baijiong Lin, Weisen Jiang, Pengguang Chen, Shu Liu, Ying-Cong Chen

Subject: Computing Technology, Computer Technology

Baijiong Lin, Weisen Jiang, Pengguang Chen, Shu Liu, Ying-Cong Chen. MTMamba++: Enhancing Multi-Task Dense Scene Understanding via Mamba-Based Decoders [EB/OL]. (2025-07-26) [2025-08-05]. https://arxiv.org/abs/2408.15101.
