Multi-Task Dense Prediction Fine-Tuning with Mixture of Fine-Grained Experts
Multi-task learning (MTL) for dense prediction has shown promising results but still faces challenges in balancing shared representations with task-specific specialization. In this paper, we introduce a novel Fine-Grained Mixture of Experts (FGMoE) architecture that advances MoE-based MTL models through three key innovations combined with fine-tuning. First, we propose intra-task experts that partition along the intermediate hidden dimensions of MLPs, enabling a finer decomposition of task information while maintaining parameter efficiency. Second, we introduce shared experts that consolidate common information across different contexts of the same task, reducing redundancy and allowing routing experts to focus on unique aspects. Third, we design a global expert that facilitates adaptive knowledge transfer across tasks based on both input features and task requirements, promoting beneficial information sharing while preventing harmful interference. In addition, we adopt a fine-tuning approach that improves parameter efficiency by training only the decoder parameters. Extensive experimental results show that the proposed FGMoE uses fewer parameters and significantly outperforms competitive MoE-based MTL models on two dense prediction datasets (\textit{i.e.,} NYUD-v2, PASCAL-Context) across various metrics.
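To make the described layer structure concrete, the following is a minimal, hypothetical PyTorch sketch of a fine-grained MoE MLP block inferred only from the abstract: routed experts partitioned along the MLP's intermediate hidden dimension, always-active shared experts, and an input-gated global expert for cross-task transfer. All names (FGMoELayer, n_routed, n_shared, top_k) and the exact routing/aggregation scheme are assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertMLP(nn.Module):
    """One fine-grained expert: a slice of the original MLP's intermediate width."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.fc2(F.gelu(self.fc1(x)))


class FGMoELayer(nn.Module):
    """Illustrative fine-grained MoE layer with routed, shared, and global experts."""
    def __init__(self, dim, mlp_hidden, n_routed=8, n_shared=1, top_k=2):
        super().__init__()
        # Partition the MLP's intermediate hidden dimension among fine-grained experts.
        slice_hidden = mlp_hidden // (n_routed + n_shared)
        self.routed = nn.ModuleList(ExpertMLP(dim, slice_hidden) for _ in range(n_routed))
        self.shared = nn.ModuleList(ExpertMLP(dim, slice_hidden) for _ in range(n_shared))
        # A single global expert intended to carry information shared across tasks.
        self.global_expert = ExpertMLP(dim, slice_hidden)
        self.router = nn.Linear(dim, n_routed)   # per-token routing over task-specific experts
        self.global_gate = nn.Linear(dim, 1)     # input-conditioned gate for the global expert
        self.top_k = top_k

    def forward(self, x):                        # x: (tokens, dim) for one task
        scores = self.router(x)                  # (tokens, n_routed)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        # Sparse combination of the selected routed (task-specific) experts.
        for k in range(self.top_k):
            for e, expert in enumerate(self.routed):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        # Shared experts are always active and consolidate common intra-task information.
        for expert in self.shared:
            out = out + expert(x)
        # The global expert is blended in adaptively, gated on the input features.
        g = torch.sigmoid(self.global_gate(x))
        return out + g * self.global_expert(x)


if __name__ == "__main__":
    layer = FGMoELayer(dim=256, mlp_hidden=1024)
    tokens = torch.randn(64, 256)
    print(layer(tokens).shape)  # torch.Size([64, 256])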
Yangyang Xu, Xi Ye, Duo Su
Computing Technology, Computer Technology
Yangyang Xu, Xi Ye, Duo Su. Multi-Task Dense Prediction Fine-Tuning with Mixture of Fine-Grained Experts [EB/OL]. (2025-07-25) [2025-08-10]. https://arxiv.org/abs/2507.19077.