DP-GPT4MTS: Dual-Prompt Large Language Model for Textual-Numerical Time Series Forecasting
Time series forecasting is crucial for strategic planning and decision-making across various industries. Traditional forecasting models concentrate mainly on numerical time series data, often overlooking important textual information such as events and news, which can significantly affect forecasting accuracy. While large language models show promise for integrating multimodal data, existing single-prompt frameworks struggle to effectively capture the semantics of timestamped text, introducing redundant information that can hinder model performance. To address this limitation, we introduce DP-GPT4MTS (Dual-Prompt GPT2-base for Multimodal Time Series), a novel dual-prompt large language model framework that combines two complementary prompts: an explicit prompt for clear task instructions and a textual prompt for context-aware embeddings derived from time-stamped data. The tokenizer generates the explicit prompt, while the embeddings from the textual prompt are refined through self-attention and feed-forward networks. Comprehensive experiments on diverse textual-numerical time series datasets demonstrate that this approach outperforms state-of-the-art algorithms in time series forecasting. This highlights the significance of incorporating textual context via a dual-prompt mechanism to achieve more accurate time series predictions.
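The dual-prompt combination described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the projection-free single-head attention, the weight shapes, and all variable names (`explicit_prompt`, `textual_prompt`, etc.) are hypothetical simplifications of the self-attention and feed-forward refinement the abstract describes.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head scaled dot-product attention with identity
    # Q/K/V projections, for illustration only.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores) @ x

def feed_forward(x, w1, w2):
    # Two-layer position-wise MLP with ReLU.
    return np.maximum(x @ w1, 0) @ w2

rng = np.random.default_rng(0)
d = 8
explicit_prompt = rng.normal(size=(4, d))  # tokenized task-instruction embeddings (hypothetical)
textual_prompt = rng.normal(size=(6, d))   # embeddings of time-stamped text (hypothetical)

w1 = rng.normal(size=(d, 2 * d)) * 0.1
w2 = rng.normal(size=(2 * d, d)) * 0.1

# Refine the textual prompt, then concatenate both prompts
# along the sequence axis before the language-model backbone.
refined = feed_forward(self_attention(textual_prompt), w1, w2)
combined = np.concatenate([explicit_prompt, refined], axis=0)
```

In this sketch the combined sequence (here 4 + 6 = 10 token embeddings) would be fed to the GPT-2 backbone together with the numerical time series patches.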
Chanjuan Liu, Shengzhi Wang, Enqiang Zhu
Computing technology; computer technology
Chanjuan Liu, Shengzhi Wang, Enqiang Zhu. DP-GPT4MTS: Dual-Prompt Large Language Model for Textual-Numerical Time Series Forecasting [EB/OL]. (2025-08-06) [2025-08-16]. https://arxiv.org/abs/2508.04239.