
Efficient Training of Multi-task Neural Solver for Combinatorial Optimization

Source: arXiv
Abstract

Efficiently training a multi-task neural solver for various combinatorial optimization problems (COPs) has received little study so far. Naive application of conventional multi-task learning approaches often falls short of delivering a high-quality, unified neural solver. This deficiency stems primarily from the significant computational demands and the lack of adequate consideration for the complexities inherent in COPs. In this paper, we propose a general and efficient training paradigm that delivers a unified combinatorial multi-task neural solver. To this end, we derive a theoretical loss decomposition for multiple tasks under an encoder-decoder framework, which enables more efficient training via bandit task-sampling algorithms guided by an intra-task influence matrix. By employing theoretically grounded approximations, our method significantly outperforms conventional training schedules, whether measured under constrained training budgets, at equivalent training epochs, or in generalization capability. On the real-world TSPLib and CVRPLib datasets, our method also achieves the best results among single-task and multi-task learning approaches. Additionally, the influence matrix provides empirical evidence supporting common practices in the field of learning to optimize, further substantiating the effectiveness of our approach. Our code is open-sourced and available at https://github.com/LOGO-CUHKSZ/MTL-COP.
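To make the bandit task-sampling idea in the abstract concrete, below is a minimal, hypothetical sketch of an EXP3-style sampler that picks which COP task to train on at each step and updates its sampling distribution from a reward signal (e.g., a normalized cross-task influence measure). The class name, the reward definition, and all parameters are illustrative assumptions, not the authors' implementation; see the linked repository for the actual method.

```python
# Hypothetical EXP3-style task sampler for multi-task neural-solver training.
# The reward is assumed to be a normalized influence signal in [0, 1];
# everything here is a sketch, not the paper's actual algorithm.
import math
import random


class Exp3TaskSampler:
    def __init__(self, num_tasks: int, gamma: float = 0.1):
        self.num_tasks = num_tasks
        self.gamma = gamma                    # exploration rate
        self.weights = [1.0] * num_tasks      # one weight per task

    def probabilities(self) -> list[float]:
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.num_tasks
                for w in self.weights]

    def sample(self) -> int:
        """Draw the next task (e.g., TSP, CVRP, ...) to train on."""
        return random.choices(range(self.num_tasks),
                              weights=self.probabilities())[0]

    def update(self, task: int, reward: float) -> None:
        """Feed back a reward in [0, 1] for the sampled task, e.g., how much
        one training step on this task improved the other tasks' losses."""
        p = self.probabilities()[task]
        estimated = reward / p                # importance-weighted estimate
        self.weights[task] *= math.exp(self.gamma * estimated / self.num_tasks)


# Usage sketch: sample a task, run one training step on it, reward by
# an observed influence signal (random placeholder here).
sampler = Exp3TaskSampler(num_tasks=4)
for step in range(10):
    task = sampler.sample()
    reward = random.random()                  # stand-in for a real signal
    sampler.update(task, reward)
```

An adversarial-bandit update like EXP3 is a natural fit here because task influences shift as the shared encoder-decoder trains, so the sampler must keep adapting rather than converge to a fixed distribution.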

Chenguang Wang, Zhang-Hua Fu, Pinyan Lu, Tianshu Yu

Computing technology; computer technology

Chenguang Wang, Zhang-Hua Fu, Pinyan Lu, Tianshu Yu. Efficient Training of Multi-task Neural Solver for Combinatorial Optimization [EB/OL]. (2023-05-10) [2025-04-29]. https://arxiv.org/abs/2305.06361.
