JointDistill: Adaptive Multi-Task Distillation for Joint Depth Estimation and Scene Segmentation
Depth estimation and scene segmentation are two important tasks in intelligent transportation systems. Jointly modeling these two tasks reduces both storage requirements and training effort. This work explores how multi-task distillation can be used to improve such unified modeling. While existing solutions transfer multiple teachers' knowledge in a static way, we propose a self-adaptive distillation method that dynamically adjusts the amount of knowledge transferred from each teacher according to the student's current learning ability. Furthermore, because multiple teachers exist, the student's gradient update direction during distillation is more prone to error, and knowledge forgetting may occur. To avoid this, we propose a knowledge trajectory that records the most essential information the model has learnt in the past, based on which a trajectory-based distillation loss is designed to guide the student to follow a similar learning curve in a cost-effective way. We evaluate our method on multiple benchmark datasets, including Cityscapes and NYU-v2. Compared to state-of-the-art solutions, our method achieves a clear improvement. The code is provided in the supplementary materials.
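The self-adaptive weighting idea from the abstract can be sketched as follows. This is a minimal illustration only: the function name, the softmax-over-losses rule, and the temperature parameter are assumptions chosen for clarity, not the paper's exact formulation, which adapts knowledge per teacher based on the student's learning ability.

```python
import math

def adaptive_teacher_weights(student_task_losses, temperature=1.0):
    """Assign each teacher a distillation weight based on the student's
    current per-task loss: tasks where the student lags behind receive
    more teacher knowledge. (Illustrative sketch, not the paper's rule.)

    student_task_losses: one scalar loss per task/teacher.
    Returns weights that are positive and sum to 1.
    """
    # Softmax over per-task losses: a higher student loss on a task
    # yields a larger weight for that task's teacher.
    exps = [math.exp(loss / temperature) for loss in student_task_losses]
    total = sum(exps)
    return [e / total for e in exps]

# Example: the student lags more on depth (loss 2.0) than on
# segmentation (loss 0.5), so the depth teacher is weighted higher.
weights = adaptive_teacher_weights([2.0, 0.5])
```

During training, each teacher's distillation loss would be scaled by its weight before summation, letting the mixture shift as the student improves on each task.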
Tiancong Cheng, Ying Zhang, Yuxuan Liang, Roger Zimmermann, Zhiwen Yu, Bin Guo
Integrated Transportation
Tiancong Cheng, Ying Zhang, Yuxuan Liang, Roger Zimmermann, Zhiwen Yu, Bin Guo. JointDistill: Adaptive Multi-Task Distillation for Joint Depth Estimation and Scene Segmentation [EB/OL]. (2025-05-15) [2025-06-21]. https://arxiv.org/abs/2505.10057.