Multi-Scale Finetuning for Encoder-based Time Series Foundation Models
Time series foundation models (TSFMs) demonstrate impressive zero-shot performance for time series forecasting. However, an important yet underexplored challenge is how to effectively finetune TSFMs on specific downstream tasks. While naive finetuning can yield performance gains, we argue that it falls short of fully leveraging TSFMs' capabilities, often resulting in overfitting and suboptimal performance. Given the diverse temporal patterns across sampling scales and the inherent multi-scale forecasting capabilities of TSFMs, we adopt a causal perspective to analyze the finetuning process, through which we highlight the critical importance of explicitly modeling multiple scales and reveal the shortcomings of naive approaches. Focusing on \textit{encoder-based} TSFMs, we propose \textbf{M}ulti\textbf{\textsc{s}}cale \textbf{\textsc{f}}ine\textbf{\textsc{t}}uning (\textbf{MSFT}), a simple yet general framework that explicitly integrates multi-scale modeling into the finetuning process. Experimental results on three different backbones (\moirai, \moment\ and \units) demonstrate that TSFMs finetuned with MSFT not only outperform naive and typical parameter-efficient finetuning methods but also surpass state-of-the-art deep learning methods.
Zhongzheng Qiao, Chenghao Liu, Yiming Zhang, Ming Jin, Quang Pham, Qingsong Wen, P. N. Suganthan, Xudong Jiang, Savitha Ramasamy
Computing technology; computer technology
Zhongzheng Qiao, Chenghao Liu, Yiming Zhang, Ming Jin, Quang Pham, Qingsong Wen, P. N. Suganthan, Xudong Jiang, Savitha Ramasamy. Multi-Scale Finetuning for Encoder-based Time Series Foundation Models [EB/OL]. (2025-06-16) [2025-07-16]. https://arxiv.org/abs/2506.14087.