XTransfer: Cross-Modality Model Transfer for Human Sensing with Few Data at the Edge
Deep learning for human sensing on edge systems offers significant opportunities for smart applications. However, its training and development are hindered by the limited availability of sensor data and the resource constraints of edge systems. Current methods that rely on transferring pre-trained models often encounter issues such as modality shift and high resource demands, resulting in substantial accuracy loss, resource overhead, and poor adaptability across different sensing applications. In this paper, we propose XTransfer, a first-of-its-kind method for resource-efficient, modality-agnostic model transfer. XTransfer freely leverages one or more pre-trained models and transfers knowledge across different modalities by (i) model repairing, which safely repairs modality shift in pre-trained model layers using only a few sensor data samples, and (ii) layer recombining, which efficiently searches for and recombines layers of interest from source models in a layer-wise manner to create compact models. We benchmark various baselines across diverse human sensing datasets spanning different modalities. Comprehensive results demonstrate that XTransfer achieves state-of-the-art performance on human sensing tasks while significantly reducing the costs of sensor data collection, model training, and edge deployment.
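The abstract only sketches layer recombining at a high level; the snippet below is a minimal, hypothetical PyTorch illustration of that idea under stated assumptions, not the authors' implementation. It assumes shape-compatible source models, a synthetic few-sample target set, and illustrative helper names (`few_shot_score`, `recombine`): candidate layers from pre-trained source models are scored with only a few target sensor samples and greedily assembled, layer by layer, into a compact model.

```python
# Hypothetical sketch of layer-wise recombination (illustrative only,
# not the XTransfer implementation).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two toy "pre-trained" source models with shape-compatible blocks so that
# blocks at the same depth can be swapped across models.
def make_source():
    return nn.Sequential(
        nn.Sequential(nn.Conv1d(1, 8, 3, padding=1), nn.ReLU()),
        nn.Sequential(nn.Conv1d(8, 8, 3, padding=1), nn.ReLU()),
        nn.Sequential(nn.AdaptiveAvgPool1d(4), nn.Flatten()),
    )

sources = [make_source(), make_source()]

# A few labelled target-modality samples (e.g. 1-D sensor windows); synthetic here.
few_x = torch.randn(16, 1, 64)
few_y = torch.randint(0, 2, (16,))
head = nn.Linear(8 * 4, 2)  # small task head attached on the target side

def few_shot_score(blocks, x, y):
    """Proxy transferability score: frozen-feature accuracy on the few samples."""
    model = nn.Sequential(*blocks, head).eval()
    with torch.no_grad():
        return (model(x).argmax(1) == y).float().mean().item()

def recombine(sources, x, y):
    """Greedy layer-wise search: at each depth keep the source block that
    scores best when appended to the partially assembled compact model."""
    chosen, n = [], len(sources[0])
    for d in range(n):
        def score(block):
            # Complete the prefix with default blocks from the first source so
            # each candidate can be scored end-to-end on the few target samples.
            suffix = [sources[0][k] for k in range(d + 1, n)]
            return few_shot_score(chosen + [block] + suffix, x, y)
        chosen.append(max((src[d] for src in sources), key=score))
    return nn.Sequential(*chosen, head)

compact = recombine(sources, few_x, few_y)
print(compact)
```

In this sketch the greedy search keeps the number of scored candidates linear in depth rather than exponential in the number of layer combinations; the actual paper's search and repair procedures may differ.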
Yu Zhang, Xi Zhang, Hualin Zhou, Xinyuan Chen, Shang Gao, Hong Jia, Jianfei Yang, Yuankai Qi, Tao Gu
Computing Technology, Computer Technology
Yu Zhang, Xi Zhang, Hualin Zhou, Xinyuan Chen, Shang Gao, Hong Jia, Jianfei Yang, Yuankai Qi, Tao Gu. XTransfer: Cross-Modality Model Transfer for Human Sensing with Few Data at the Edge [EB/OL]. (2025-06-28) [2025-07-21]. https://arxiv.org/abs/2506.22726